There are some circular dependencies (TemplateVM.appvms,
NetVM.connected_vms, and probably more), which prevent the garbage
collector from cleaning them up.
Fixes QubesOS/qubes-issues#1380
QubesVM.start() first creates the domain as paused, completes its setup
(including starting qubesdb-daemon and creating the appropriate entries),
then resumes the domain. So wait for that resume to be sure that
`qubesdb-daemon` is already running and populated.
QubesOS/qubes-issues#1110
QubesWatch._register_watches is called from a libvirt event callback,
asynchronously with respect to qvm-start. This means that
`qubesdb-daemon` may not be running or populated yet.
If the first QubesDB connection (or watch registration) fails, schedule
the next try using timers from the libvirt event API (which is the base
of the QubesWatch mainloop), instead of some sleep loop. This way other
events will be processed in the meantime.
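A minimal sketch of that scheduling, assuming hypothetical helper and
exception names (connect_and_register, QubesDBNotReady) and assuming the
libvirt default event loop is registered and running, as it is for the
QubesWatch mainloop:

    import libvirt

    RETRY_INTERVAL_MS = 500  # hypothetical retry interval

    class QubesDBNotReady(Exception):
        """Hypothetical: qubesdb-daemon not running or not populated yet."""

    def connect_and_register(vm_name):
        # hypothetical stand-in for the real QubesDB connect + watch setup
        raise QubesDBNotReady(vm_name)

    def retry_cb(timer_id, vm_name):
        # the timer keeps firing every RETRY_INTERVAL_MS until removed
        try:
            connect_and_register(vm_name)
        except QubesDBNotReady:
            return                               # try again on the next tick
        libvirt.virEventRemoveTimeout(timer_id)  # success - stop retrying

    def register_watches(vm_name):
        try:
            connect_and_register(vm_name)
        except QubesDBNotReady:
            # schedule retries via the libvirt event API instead of sleeping,
            # so other events keep being processed in the meantime
            libvirt.virEventAddTimeout(RETRY_INTERVAL_MS, retry_cb, vm_name)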
QubesOS/qubes-issues#1110
This makes it easier to handle some corner cases. One of them is having
an entry without `dir_path` defined. This may happen when migrating from
R2 (using backup+restore or in-place) while some DisposableVM was running
(even if not included in the backup itself).
Fixes qubesos/qubes-issues#1124
Reported by @doncohen, thanks @wyory for providing more details.
We use only one device-mapper layer for HVMs, and it isn't the same as
the one used for PV - it is the layer which PV sets up in its initramfs.
Device-mapper layers summary for template-based VMs:
PV: root.img+root-cow.img (dom0) -> xvda, xvda+volatile.img (VM)
HVM: root.img+volatile.img (dom0)
Since libvirt does not support such events (at least for the libxl
driver), we need some way to notify qubes-manager when a device is
attached or detached. Use the same protocol as for connect/disconnect,
but on the target domain.
Define it only when really needed:
- during VM creation - to generate the UUID
- just before VM startup
As a consequence we must handle a possible exception when accessing
vm.libvirt_domain. It would be a good idea to make this field private in
the future. That isn't possible for now because the block_* functions are
external to the QubesVm class.
This hopefully fixes a race condition where Qubes Manager tries to access
libvirt_domain (through some QubesVm.* call) at the same time as another
tool is removing the domain. Additionally, if Qubes Manager were to lose
that race, it could define the domain again, leaving an unused libvirt
domain behind (blocking that domain name for future use).
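A minimal sketch of the pattern, with a hypothetical ExampleVM class: the
libvirt object is looked up lazily and callers tolerate libvirt.libvirtError
when the domain is gone:

    import libvirt

    class ExampleVM(object):
        def __init__(self, name, conn):
            self.name = name
            self._conn = conn
            self._libvirt_domain = None      # not defined until really needed

        @property
        def libvirt_domain(self):
            if self._libvirt_domain is None:
                # lookup only; defining the domain happens during VM creation
                # (to generate the UUID) or just before startup
                self._libvirt_domain = self._conn.lookupByName(self.name)
            return self._libvirt_domain

    vm = ExampleVM('work', libvirt.open('xen:///'))
    try:
        print(vm.libvirt_domain.UUIDString())
    except libvirt.libvirtError:
        # the domain may be undefined or already removed by another tool -
        # callers must handle this instead of assuming it always exists
        print('domain not defined')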
Provide vm.refresh(), which will force a reconnection to the QubesDB
daemon and also fetch a new libvirt object (including a new ID, if any).
Use this method whenever a QubesDB call raises a DisconnectedError
exception. Also raise that exception when someone tries to talk to a
QubesDB that isn't running - instead of returning None.
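A minimal sketch of the retry pattern, with hypothetical names for the VM
wrapper, its QubesDB handle and the DisconnectedError class:

    class DisconnectedError(Exception):
        """Hypothetical stand-in for the QubesDB disconnect exception."""

    class ExampleVM(object):
        def __init__(self, qdb_factory):
            self._qdb_factory = qdb_factory
            self.qdb = qdb_factory()

        def refresh(self):
            # reconnect to qubesdb-daemon (the real refresh() also
            # re-fetches the libvirt object, whose ID may have changed)
            self.qdb = self._qdb_factory()

    def read_qubesdb_entry(vm, path):
        try:
            return vm.qdb.read(path)
        except DisconnectedError:
            vm.refresh()              # stale connection - reconnect
            return vm.qdb.read(path)  # then retry once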
The statement that unlock_db() is always called directly after save() is
no longer true - the tests hold the lock the whole time, doing multiple
saves in the meantime.
When qfile-dom0-unpacker detects an error, it sends an error report to
stdout and terminates (so stdout is closed). That close should be
propagated to the VM process (as EOF on its stdin), which signals it to
stop sending the data and to handle the error report.
Also qrexec-client holds the connection until both stdin and
stdout are closed.
So when that EOF is missing, tar2qfile will not detect the error report
and will keep trying to send the data, while qrexec-client holds the
connection even though the receiving process is long dead.
To prevent that deadlock, close the FD in the python code, so
qfile-dom0-unpacker is the last owner of the write end of the pipe. When
it closes its stdout, qrexec-client will receive EOF on its stdin.
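A minimal sketch of the dom0 side, with illustrative command lines (real
arguments omitted); the error-report pipe runs from qfile-dom0-unpacker's
stdout to qrexec-client's stdin:

    import os
    import subprocess

    unpacker_cmd = ['qfile-dom0-unpacker']   # illustrative, real args omitted
    client_cmd = ['qrexec-client']           # illustrative, real args omitted

    report_r, report_w = os.pipe()
    unpacker = subprocess.Popen(unpacker_cmd, stdout=report_w)
    client = subprocess.Popen(client_cmd, stdin=report_r)

    # the crucial part: drop our copies of both pipe ends, so the unpacker
    # is the last owner of the write end - when it terminates after sending
    # an error report, qrexec-client gets EOF on its stdin instead of a
    # connection that never closes
    os.close(report_w)
    os.close(report_r)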
Otherwise a deadlock could happen - the script would try to take a read
lock on qubes.xml while the calling tool may already hold the lock. If
that is a write lock (as is the case for qfile-daemon-dvm), the deadlock
occurs.
None of the existing portable locking modules found supports RW locks.
Use low-level system locking - both Windows and Linux support such a
feature.
Drop the locking code in write_firewall_conf() because it is called with
the QubesVmCollection lock held anyway.
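A minimal sketch of the Linux half of that low-level locking, using fcntl
(POSIX record locks); the Windows half would use LockFileEx and is omitted
here:

    import fcntl

    class ExampleFileLock(object):
        """Hypothetical RW lock built directly on the OS locking primitives."""

        def __init__(self, path):
            self._file = open(path, 'r+')

        def lock_for_reading(self):
            fcntl.lockf(self._file, fcntl.LOCK_SH)   # shared (read) lock

        def lock_for_writing(self):
            fcntl.lockf(self._file, fcntl.LOCK_EX)   # exclusive (write) lock

        def unlock(self):
            fcntl.lockf(self._file, fcntl.LOCK_UN)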
Some VM types do not have a particular disk image. Instead of enumerating
the cases in the storage class, signal an unused image from the VM class
by setting the appropriate attribute to None.
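A minimal sketch of that convention, with hypothetical class names:

    class ExampleNetVM(object):
        volatile_img = None                    # this VM type has no volatile image

    class ExampleStorage(object):
        def create_volatile_image(self, vm):
            if vm.volatile_img is None:        # unused image - nothing to create
                return
            with open(vm.volatile_img, 'wb') as img:
                img.truncate(10 * 1024 * 1024) # illustrative size only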
There are still a few uses of direct xenstore access; most of them are
Xen-specific (so they don't need to be portable). For now, simply don't
connect to xenstore when no 'xen.lowlevel.xs' module is present. This
will break such Xen-specific accesses - they must be reworked somehow,
either by adding appropriate conditionals, or by moving such code
somewhere else (custom methods of the libvirt driver?).
There are still uses of it: QubesHost.get_free_xen_memory and
QubesHost.measure_cpu_usage. They will be migrated to libvirt later (for
now some things will be broken - namely qubes-manager).
Mostly done. Things still using xenstore/not working at all:
- DispVM
- qubesutils.py (especially qvm-block and qvm-usb code)
- external IP change notification for ProxyVM (should be done via RPC
service)
This makes it easier to import the right objects in submodules (only one
object). It also implements a lazy connection - at first access, not at
module import - which speeds up tools that don't need runtime information
(like qvm-prefs or qvm-service). In the future this will ease the
migration from xenstore to QubesDB.
Also implement an "offline mode" - operate on qubes.xml without
connecting to the VMM - and raise an exception at any such attempt.
This is needed to run tools during installation, where only a minimal set
of services is started, and especially no libvirt.
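A minimal sketch of such a single connection object, with hypothetical
class and method names; the lazy connect and the offline mode are the
relevant parts:

    import libvirt

    class ExampleVMMConnection(object):
        def __init__(self):
            self._libvirt_conn = None
            self._offline_mode = False

        def enable_offline_mode(self):
            # operate on qubes.xml only; any attempt to reach the VMM fails
            self._offline_mode = True

        @property
        def libvirt_conn(self):
            if self._offline_mode:
                raise RuntimeError('VMM connection not available in offline mode')
            if self._libvirt_conn is None:
                # connect lazily, at first access - tools that only read
                # qubes.xml (qvm-prefs, qvm-service) never pay this cost
                self._libvirt_conn = libvirt.open('xen:///')
            return self._libvirt_conn

    vmm = ExampleVMMConnection()   # one shared object for submodules to import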
QubesVmCollection.save() overwrites qubes.xml by creating a new file,
then renaming it over the old one. If any process has the (old) file open
at the same time - especially while waiting in lock_db_for_writing() - it
will end up accessing the old, already unlinked file.
The exact calls would look like:
P1: lock_db_for_writing
      fd = open('qubes.xml')
      fcntl(fd, F_SETLK, ...)
P2: lock_db_for_writing
      fd = open('qubes.xml')
      fcntl(fd, F_SETLK, ...)        <- waits for P1's lock
P1: ...
    save():
      open(temp-file)
      write(temp-file, ...)
      ...
      flush(temp-file)
      rename(temp-file, 'qubes.xml')
      close(fd)                      // close old file
P2: lock_db_for_writing succeeds
    *** fd points at the already unlinked file
P1: unlock_db
      close(qubes.xml)
To fix that problem, add a check whether the (already locked) file is
still the same as qubes.xml.
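A minimal sketch of that check, assuming a hypothetical path constant;
the key is comparing the locked descriptor with a fresh stat() of
qubes.xml and retrying when they point at different inodes:

    import fcntl
    import os

    QUBES_STORE = '/var/lib/qubes/qubes.xml'   # hypothetical constant name

    def lock_db_for_writing():
        while True:
            store = open(QUBES_STORE, 'r+')
            fcntl.lockf(store, fcntl.LOCK_EX)         # may block for a while
            current = os.stat(QUBES_STORE)
            locked = os.fstat(store.fileno())
            if (locked.st_dev, locked.st_ino) == (current.st_dev, current.st_ino):
                return store                          # still the real qubes.xml
            store.close()                             # unlinked leftover - retry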
Since tar multi-volume archives are no longer used, we can simply
instruct tar to pipe its output through gzip (or whatever compressor we
want). Include the compressor command used in the backup header.
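A minimal sketch of the pipeline, with hypothetical paths; the compressor
command itself goes into the backup header so restore knows what to run:

    import subprocess

    compressor_cmd = 'gzip'                    # recorded in the backup header

    with open('/var/tmp/backup.bin', 'wb') as backup_file:
        tar = subprocess.Popen(
            ['tar', '-cf', '-', '--sparse', '/var/lib/qubes/appvms/work'],
            stdout=subprocess.PIPE)
        compressor = subprocess.Popen(
            [compressor_cmd], stdin=tar.stdout, stdout=backup_file)
        tar.stdout.close()                     # compressor sees EOF when tar exits
        compressor.wait()
        tar.wait()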
Tar multi-volume support is broken when used with sparse files[1], so do
not use it. Instead, simply cut the archive manually and concatenate the
pieces at restore time. This change requires a small modification of the
restore process, so make this a new backup format ("3"). Also add the
backup format version to the header, instead of relying on guessing code.
For now only cleartext and encrypted backups are implemented; compression
will come as a separate commit.
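A minimal sketch of the cutting, assuming a hypothetical 100 MiB chunk
size; the restore side only needs to concatenate the chunks in order
before handing the stream back to tar:

    CHUNK_SIZE = 100 * 1024 * 1024   # hypothetical chunk size

    def split_stream(stream, name_prefix):
        index = 0
        while True:
            data = stream.read(CHUNK_SIZE)
            if not data:
                break
            with open('%s.%03d' % (name_prefix, index), 'wb') as chunk:
                chunk.write(data)
            index += 1

    # restore side, conceptually: cat prefix.000 prefix.001 ... | tar -x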
loop device parsing should have "dXpY_style = True" in order to
correctly parse partitions on loop devices.
Reasoning:
==========
Using losetup to create a virtual SD card disk into a loop device and
creating partitions for it results in new devices within an AppVM that
look like: /dev/loop0p1 /dev/loop0p2 and so on.
However, as soon as they are created, Qubes Manager raises an exception
and becomes blocked with the following message (redacted):
"QubesException: Invalid device name: loop0p1
at line 639 of file
/usr/lib64/python2.7/site-packages/qubesmanager/main.py
Details:
line: raise QubesException....
func: block_name_to_majorminor
line no.: 181
file: ....../qubes/qubesutils.py
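An illustrative sketch of the two naming styles involved (these are not
the exact regexes from qubesutils.py):

    import re

    #   sdb3     -> base 'sdb',   partition '3'   (classic style)
    #   loop0p2  -> base 'loop0', partition '2'   (dXpY style)
    CLASSIC_RE = re.compile(r'^([a-z]+)(\d*)$')
    DXPY_RE = re.compile(r'^([a-z]+\d+)(?:p(\d+))?$')

    print(CLASSIC_RE.match('sdb3').groups())   # ('sdb', '3')
    print(DXPY_RE.match('loop0p2').groups())   # ('loop0', '2')
    print(DXPY_RE.match('loop0').groups())     # ('loop0', None)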
Backups should also be safe in the long term, so change the HMAC to
SHA512, which should remain usable much longer than SHA1.
See this thread for discussion:
https://groups.google.com/d/msg/qubes-devel/5X-WjdP9VqQ/4zI8-QWd0S4J
Additionally, save the guessed HMAC algorithm in artificial header data
(when no real header exists).
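A minimal sketch of the integrity-tag change (the real backup code drives
this through openssl and handles the passphrase differently):

    import hashlib
    import hmac

    def backup_hmac(passphrase, data):
        # HMAC-SHA512 instead of HMAC-SHA1 for long-term safety
        return hmac.new(passphrase, data, hashlib.sha512).hexdigest()

    print(backup_hmac(b'backup passphrase', b'backup chunk contents'))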