These have all either been migrated to core3, or are not needed anymore.
There is still a qvm-tools directory with a few tools that need to be
migrated, or installed as-is.
This way it will work independently of where the qrexec-policy tool is
called from (in most cases from a system service, as root).
This is also an architecture very similar to what we'll need when moving
to a GUI domain - there the GUI part will also be separated from the
policy evaluation logic.
QubesOS/qubes-issues#910
Have a dm-snapshot of a dm-snapshot. The first layer is to "cache" changes
done by the base volume holder (the TemplateVM in the case of root.img),
the second layer is to hold changes done by the snapshot volume holder
(the AppVM in the case of root.img). In the case of Linux VMs the second
layer is normally done inside the VM (the original volume is exposed
read-only). But this does not work for non-Linux VMs, or even Linux VMs
without qubes-specific startup scripts.
This is the first part of the change - the actual construction of the two
layers of dm-snapshot, not plugged into the core scripts yet.
QubesOS/qubes-issues#2256
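A minimal sketch of the two-layer construction described above, assuming
hypothetical device paths and dmsetup invoked from Python (the real core
code may differ):

    # Hypothetical sketch: stack two dm-snapshot layers with dmsetup.
    # Device paths and names are illustrative, not the actual core code.
    import subprocess

    def blockdev_sectors(dev):
        # Device size in 512-byte sectors, as required by the dm table.
        return int(subprocess.check_output(['blockdev', '--getsz', dev]))

    def dm_snapshot(name, origin, cow):
        # snapshot target: <origin> <COW device> <persistent> <chunksize>
        table = '0 {} snapshot {} {} P 256'.format(
            blockdev_sectors(origin), origin, cow)
        subprocess.check_call(['dmsetup', 'create', name, '--table', table])

    # Layer 1: "cache" changes done by the base volume holder (TemplateVM).
    dm_snapshot('vm-root-snap', '/dev/loop0', '/dev/loop1')
    # Layer 2: hold changes done by the snapshot volume holder (AppVM),
    # using the first snapshot device as its origin.
    dm_snapshot('vm-root-cow', '/dev/mapper/vm-root-snap', '/dev/loop2')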
Otherwise qvm-create-default-dvm may fail to include it in
saved-cows.tar, which will lead to the DispVM not really being disposable.
Fixes QubesOS/qubes-issues#2200
This is no longer necessary since volatile.img is formatted inside the
VM. This also fixes DispVM creation if the user sets a restrictive umask
for root. Maybe related to #2200.
The following list is bollocks. There were many, many more.
Conflicts:
core-modules/003QubesTemplateVm.py
core-modules/005QubesNetVm.py
core/qubes.py
core/storage/__init__.py
core/storage/xen.py
doc/qvm-tools/qvm-pci.rst
doc/qvm-tools/qvm-prefs.rst
qubes/tools/qmemmand.py
qvm-tools/qvm-create
qvm-tools/qvm-prefs
qvm-tools/qvm-start
tests/__init__.py
vm-config/xen-vm-template-hvm.xml
This commit took 2 days (26-27.01.2016) and put our friendship to test.
--Wojtek and Marek
systemd-user-sessions.service is specifically for that, do not use the
hack (plymouth-quit.service), which doesn't work when that service is
disabled.
Fixes QubesOS/qubes-issues#1250
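For illustration only (this is a hypothetical drop-in, not necessarily the
unit changed here), ordering a unit after systemd-user-sessions.service
could look like:

    # /etc/systemd/system/qubes-guid.service.d/wait-for-sessions.conf
    # Hypothetical drop-in; the unit name is an assumption.
    [Unit]
    After=systemd-user-sessions.service
    Wants=systemd-user-sessions.service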
It may happen (especially when a VM doesn't shut down cleanly and needs to
be killed) that qubesdb-daemon will not notice the VM shutdown
immediately. Normally it would stop after a 60s timeout, but speed it up
in the case of system shutdown.
QubesOS/qubes-issues#1425
This is part of fixing qvm-start.
qmemman was moved with minimal changes, mainly to module names.
Moved the function parsing human-readable sizes from core2. This function
is wrong, because it treats k/M/G as 1024-based, but leave it for now.
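A rough sketch of what such a parser looks like (hypothetical code, not
the moved function itself); note the 1024-based multipliers:

    def parse_size(size):
        # Hypothetical sketch; treats k/M/G as 1024-based, like the
        # function described above.
        multipliers = {'k': 1024, 'K': 1024, 'M': 1024**2, 'G': 1024**3}
        size = size.strip()
        if size and size[-1] in multipliers:
            return int(size[:-1]) * multipliers[size[-1]]
        return int(size)

    assert parse_size('10M') == 10 * 2**20   # not 10 * 10**6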
This is required to create VMs in the process of building a Live system,
where libvirt isn't running.
Additionally there is no udev in the build environment, so we need to
manually create /dev/loop*p* based on sysfs info.
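Roughly, creating the partition nodes from sysfs can be done like this (a
hypothetical sketch; names and paths are illustrative):

    import glob
    import os
    import stat

    def create_loop_partition_nodes(loopdev='loop0'):
        # For each loopXpY partition exposed in sysfs, read its
        # major:minor and create the matching /dev node (udev would
        # normally do this).
        for part in glob.glob('/sys/block/{0}/{0}p*'.format(loopdev)):
            name = os.path.basename(part)                  # e.g. loop0p1
            with open(os.path.join(part, 'dev')) as f:
                major, minor = (int(x) for x in f.read().split(':'))
            node = os.path.join('/dev', name)
            if not os.path.exists(node):
                os.mknod(node, 0o600 | stat.S_IFBLK,
                         os.makedev(major, minor))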
When the user logs in, the login script will try to connect a guid to each
of the running VMs. If a VM is still booting at this stage, it will never
automatically get its guid (until the user tries to start some program
there). This can for example lead to a missing nm-applet icon.
When called from libvirt->libxl, there is a libvirt lock taken on that
domain. Because of that, we can't access the libvirt domain, so basically
no runtime information. Without --offline-mode, the script waited on
the lock and then was killed by libxl after a timeout - before actually
committing the changes.
Mostly done. Things still using xenstore/not working at all:
- DispVM
- qubesutils.py (especially qvm-block and qvm-usb code)
- external IP change notification for ProxyVM (should be done via RPC
service)
Forking the daemon after initializing the hypervisor connection can cause
problems (and actually does in the case of libvirt).
To notify systemd when the daemon is ready, use the notify socket
(previously it was the termination of the parent process).
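The notify socket protocol boils down to sending READY=1 to the socket
named in NOTIFY_SOCKET; a minimal sketch (assuming a Type=notify unit, not
the daemon's actual code):

    import os
    import socket

    def notify_systemd_ready():
        addr = os.environ.get('NOTIFY_SOCKET')
        if not addr:
            return  # not started by systemd with a notify socket
        if addr.startswith('@'):
            addr = '\0' + addr[1:]  # abstract socket namespace
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_DGRAM)
        try:
            sock.sendto(b'READY=1', addr)
        finally:
            sock.close()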
Not only refresh the info about mounted devices, but also check for
others - detected before xenstored was running. Because of a recent change
in udev rules (adding flock) it shouldn't deadlock now.
It is common to both dom0 and the VM, and is also quite Linux-specific
(other OSes will need a different implementation). So move it to the
Linux-specific repo (not the dom0-specific one).