Otherwise a deadlock could happen: the script would try to get a read lock
on qubes.xml while the calling tool may already hold the lock. If that is
a write lock (which is the case for qfile-daemon-dvm), the deadlock
occurs.
None of the existing portable locking modules that were found supports RW
locks. Use low-level system locking instead - both Windows and Linux
provide such a feature.
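On Linux this boils down to fcntl byte-range locks, which already
distinguish shared (read) and exclusive (write) locks; Windows has
LockFileEx for the same purpose. A rough sketch of the Linux side only
(standalone functions and an illustrative path; the real code wraps this
differently):

    import fcntl

    QUBES_XML = '/var/lib/qubes/qubes.xml'   # illustrative path

    def lock_db_for_reading(path=QUBES_XML):
        f = open(path)
        fcntl.lockf(f, fcntl.LOCK_SH)   # shared lock: many readers at once
        return f

    def lock_db_for_writing(path=QUBES_XML):
        f = open(path, 'r+')
        fcntl.lockf(f, fcntl.LOCK_EX)   # exclusive lock: waits for readers/writers
        return f

    def unlock_db(f):
        fcntl.lockf(f, fcntl.LOCK_UN)
        f.close()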
Drop the locking code in write_firewall_conf(), because it is called with
the QubesVmCollection lock held anyway.
Some VM types do not have a particular disk image. Instead of enumerating
such cases in the storage class, signal an unused image from the VM class
by setting the appropriate attribute to None.
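For illustration only (class and attribute names here are assumptions,
not the exact storage code), the storage side then merely has to skip
images set to None:

    import os

    class Storage(object):
        def __init__(self, vm):
            # The VM class sets an image attribute to None when that image
            # simply does not exist for this VM type.
            self.root_img = vm.root_img
            self.private_img = vm.private_img
            self.volatile_img = vm.volatile_img

        def get_disk_utilization(self):
            total = 0
            for img in (self.root_img, self.private_img, self.volatile_img):
                if img is None:
                    continue            # unused image - nothing to measure
                total += os.path.getsize(img)
            return total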
There are still a few uses of direct xenstore access; most of them are
xen-specific (so they don't need to be portable). For now, simply don't
connect to xenstore when the 'xen.lowlevel.xs' module is not present. This
will break such xen-specific accesses - they must be reworked somehow,
either by adding appropriate conditionals, or by moving such code
somewhere else (custom methods of the libvirt driver?).
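The conditional connection amounts to something like the following sketch
(placement and names are illustrative):

    try:
        import xen.lowlevel.xs
        xs = xen.lowlevel.xs.xs()   # xenstore handle, Xen hosts only
    except ImportError:
        # No Xen python bindings: the remaining xen-specific accesses
        # will fail until they are reworked.
        xs = None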
There are still uses of it: QubesHost.get_free_xen_memory and
QubesHost.measure_cpu_usage. They will be migrated to libvirt later (for
now some things will be broken - namely qubes-manager).
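For reference, the libvirt-based replacements could look roughly like
this (a sketch only, not the actual migration; exact API availability
depends on the libvirt version):

    import libvirt

    conn = libvirt.open('xen:///')
    # free memory on the host, in bytes
    free = conn.getFreeMemory()
    # cumulative CPU counters in ns ('user', 'kernel', 'idle', ...);
    # usage would be derived from two samples taken over an interval
    stats = conn.getCPUStats(libvirt.VIR_NODE_CPU_STATS_ALL_CPUS)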
Mostly done. Things still using xenstore/not working at all:
- DispVM
- qubesutils.py (especially qvm-block and qvm-usb code)
- external IP change notification for ProxyVM (should be done via RPC
service)
This makes it easier to import the right objects in submodules (only one
object). It also implements a lazy connection - established at first
access, not at module import - which speeds up tools that don't need
runtime information (like qvm-prefs or qvm-service). In the future this
will ease the migration from xenstore to QubesDB.
Also implement "offline mode" - operate on qubes.xml without connecting
to VMM - raise exception at such try.
This is needed to run tools during installation, where only minimal
set of services are started, especially no libvirt.
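A rough sketch of the single, lazily-connecting object with offline mode
(class and attribute names are assumptions, not the exact code):

    import libvirt

    class QubesVMMConnection(object):
        def __init__(self):
            self._libvirt_conn = None
            self._offline_mode = False

        def enable_offline_mode(self):
            self._offline_mode = True

        @property
        def libvirt_conn(self):
            if self._offline_mode:
                # e.g. during installation no libvirt is running
                raise RuntimeError('VMM connection requested in offline mode')
            if self._libvirt_conn is None:
                # connect lazily, at first access instead of module import
                self._libvirt_conn = libvirt.open('xen:///')
            return self._libvirt_conn

    vmm = QubesVMMConnection()   # the single object submodules import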
QubesVmCollection.save() overwrites qubes.xml by creating a new file and
then renaming it over the old one. If any process has the (old) file open
at the same time - especially while waiting in lock_db_for_writing() -
it will end up accessing the old, already unlinked file.
The exact calls would look like:
P1                                     P2
lock_db_for_writing
  fd = open('qubes.xml')
  fcntl(fd, F_SETLK, ...)
                                       lock_db_for_writing
                                         fd = open('qubes.xml')
                                         fcntl(fd, F_SETLK, ...)
                                         ...
save():
  open(temp-file)
  write(temp-file, ...)
  ...
  flush(temp-file)
  rename(temp-file, 'qubes.xml')
close(fd) // close old file
                                       lock_db_for_writing succeed
                                       *** fd points at already unlinked file
                                       unlock_db
                                         close(qubes.xml)
To fix that problem, a check was added that the (already locked) file is
still the same one as qubes.xml.
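A sketch of what such a check could look like (names and the retry loop
are illustrative assumptions, not necessarily the exact fix):

    import fcntl
    import os

    def lock_db_for_writing(path='/var/lib/qubes/qubes.xml'):
        while True:
            f = open(path, 'r+')
            fcntl.lockf(f, fcntl.LOCK_EX)      # may block while another writer saves
            st_fd = os.fstat(f.fileno())
            st_path = os.stat(path)
            if (st_fd.st_dev, st_fd.st_ino) == (st_path.st_dev, st_path.st_ino):
                return f                       # still the current qubes.xml
            # qubes.xml was renamed over while we waited; reopen and retry
            f.close()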