Volume.export() may now be a coroutine and may also be accompanied by
Volume.export_end(), which cleans up after it.
See the previous commits for the building blocks of this change.
This commit adjusts usage of Volume.export() and adds matching
Volume.export_end() throughout the code base.
Fixes QubesOS/qubes-issues#5935
This is a counterpart to Volume.export(). Until now, no driver needed
any cleanup after exporting data, but that doesn't mean none ever will.
This is especially relevant because Volume.export() is supposed to
return the path of a snapshot from before VM start - which may be
different from the currently active one.
QubesOS/qubes-issues#5935
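A minimal sketch of how a driver could pair the two calls (the helper
names are hypothetical, and export_end() is assumed here to receive the
path returned by export()):

    import qubes.storage

    class MyVolume(qubes.storage.Volume):
        def export(self):
            # hand out a snapshot from before VM start, not the active volume
            path = self._activate_backup_snapshot()   # hypothetical helper
            return path

        def export_end(self, path):
            # counterpart cleanup: undo whatever export() had to set up
            self._deactivate_backup_snapshot(path)    # hypothetical helper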
When Volume.export() is called late and can also be a coroutine, it may
make sense to also have a cleanup function for the changes it makes.
This commit only adjusts the backup code internals; it doesn't call the
appropriate Volume function yet.
QubesOS/qubes-issues#5935
There are two reasons for this:
- call it from a coroutine, allowing export() itself to be a coroutine
- avoid calling export() when only collecting a preliminary backup
  summary
Both need some more changes in other parts of the codebase to be useful
(see next commits).
This will be especially useful when export() needs to make some changes
(like creating a snapshot, mounting something, etc.).
QubesOS/qubes-issues#5935
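A rough sketch of the calling side, assuming export() may or may not be
a coroutine (the helper name is made up for illustration):

    import asyncio

    async def volume_export_path(volume):
        # export() may now be a coroutine - await it only if it is one;
        # once the data has been read, export_end(path) should be handled
        # the same way to clean up
        path = volume.export()
        if asyncio.iscoroutine(path):
            path = await path
        return path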
Make every volume's pool controlled by the DisposableVM template. This
specifically makes the DispVM's volatile volume be placed directly in
the same pool as its template's.
Fixes QubesOS/qubes-issues#5933
Add a dummy TestVolume with an empty create() method. Other core code
also requires TestPool.get_volume to be implemented, so add that too (a
naive version remembering the instances returned from
TestPool.init_volume).
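A sketch of what the test doubles could look like (constructor details
are approximate):

    import qubes.storage

    class TestVolume(qubes.storage.Volume):
        def create(self):
            # dummy - the test pool doesn't back volumes with real storage
            pass

    class TestPool(qubes.storage.Pool):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._volumes = {}   # vid -> TestVolume, naive bookkeeping

        def init_volume(self, vm, volume_config):
            volume = TestVolume(pool=self, **volume_config)
            self._volumes[volume.vid] = volume
            return volume

        def get_volume(self, vid):
            return self._volumes[vid]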
Avoid local modifications in a pool's init_volume() influencing
vm.volume_config. Currently every pool driver replaces
volume_config['pool'] with a pool object (instead of the name), which
leads to confusing cases where, depending on the start stage, it is
sometimes an object and sometimes a string.
Additionally, some pool drivers may modify volume_config in unexpected
ways - for example, the test pool driver removes the 'pool' entry
entirely. Avoid this fragile interface by giving the pool driver a copy
of volume_config, instead of vm.volume_config directly.
Note one side effect: 'vid' (and other pool-specific parameters) is not
set in vm.volume_config directly after creating a VM, but possibly only
after loading from XML. This should not be an issue in theory (no core
code should expect it), but if some place uses volume_config instead of
the Volume instance for getting pool-specific options, it should be
fixed.
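The intended calling convention, roughly (names approximate):

    # give the driver its own copy; vm.volume_config keeps plain values
    # (e.g. the pool *name*), even if the driver substitutes a Pool object
    for name, config in vm.volume_config.items():
        vm.storage.init_volume(name, config.copy())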
Close the transport used to wait for user input, otherwise all further
tests would fail on cleanup (FD leak detected). In practice this is
only useful when using the wait_on_fail decorator without the
--failfast option.
Don't update _size in the getter, so it can be unlocked (which is
helpful for QubesOS/qubes-issues#5935).
!!! If cherry-picking for release 4.0, also adjust import_data() to !!!
!!! use self.size (no underscore) instead of self._get_size()       !!!
Fix load error reporting - make sure the 'err' variable is transferred
into the 'runTest' function scope.
Then, relax the test loading requirements - use 'resolve' instead of
'load', to bypass the dependency check (defined in the package's
setup.py). The required dependencies should be handled by RPM already,
and in some cases may not match those of the Python package. An example
is the PDF converter, where dependencies at the Python level are set
for the actual converter, which is irrelevant for running tests from
dom0 (the tests interact with the PDF converter inside a VM).
Replace the constant audio bytearray with a generated sine wave sample
to get more robust results across test environments. Use zero-crossings
as an audio fingerprint and compare it against the sine wave frequency.
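A sketch of the fingerprinting idea (the sample rate and tone frequency
below are just example values):

    import numpy as np

    RATE = 44100        # assumed recording sample rate [Hz]
    FREQ = 4400         # example test tone frequency [Hz]

    def sine_wave(duration_s=1.0):
        t = np.arange(int(RATE * duration_s)) / RATE
        return np.sin(2 * np.pi * FREQ * t)

    def zero_crossings(samples):
        # a pure tone crosses zero twice per period, so for a good
        # recording this should be close to 2 * FREQ * duration
        return int(np.count_nonzero(np.diff(np.sign(samples)) != 0))

    recorded = sine_wave()    # in the real test: audio recorded in the VM
    assert abs(zero_crossings(recorded) - 2 * FREQ) < 0.1 * 2 * FREQ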
XSA-320 / CVE-2020-0543 affects Ivy Bridge and later platforms, but a
fix (microcode update) won't be available for Ivy Bridge. Disable the
affected instruction (do not announce it in CPUID - complying software
should then not use it).
revisions_to_keep should be inherited from the pool by default (or
whatever other logic is in the storage pool driver). When creating a VM
in a specific pool, the volumes' config is re-initialized to include
the right defaults. But the config cleaning logic in
`_clean_volume_config()` failed to remove the revisions_to_keep
property initialized by the default pool driver. This prevented the new
pool driver from applying its own default logic.
An extreme result was the inability to create a VM in the 'file' pool
at all, because that pool refuses any revisions_to_keep > 1, while the
default LVM pool has revisions_to_keep = 2.
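A hedged sketch of the intended effect (this is not the actual helper,
just the shape of the fix):

    def _clean_volume_config(config):
        # drop pool-specific values that the *target* pool driver should
        # compute itself, including revisions_to_keep, so its own defaults
        # can apply when the VM is created in a different pool
        clean = config.copy()
        for key in ('vid', 'pool', 'revisions_to_keep'):
            clean.pop(key, None)
        return clean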
* origin/pr/344:
travis: pip -> pip3
Update .travis.yml
Drop initial root thin pool definition
Prevent double hyphens in thin_pool parsing
Rename default root thin pool from 'lvm' to 'root'
* rename-property-del-reset:
Fire property-reset event when default value might change
Convert handler to use property-reset instead of property-del
Remove leftovers of default_fw_netvm
Deprecate property-del:name events and introduce property-reset:name instead
Those are only some cases, the most obvious ones:
- defaults inherited from a template
- xid and start_time on domain start/stop
- IP related properties
- icon
QubesOS/qubes-issues#5834
There was also one case of triggering property-{del => reset}
synthetically on a default value change. Adjust it too and drop the
-pre- event call in that case.
QubesOS/qubes-issues#5834
And the same for -pre- events.
The property-del name is really confusing (it makes sense only to those
with deep knowledge of the implementation), because the property isn't
really deleted - it is only reverted to the "default" state (which most
properties have). So, name the event property-reset, intentionally
similar to property-set, as it is also a kind of value change.
Additionally, the property-reset event is meant to be fired when the
(dynamic) default value changes. Due to the current implementation this
is a manual process, so it can't be guaranteed to be fired in all those
cases, but let's try to cover as many as possible.
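A sketch of what a listener looks like after the rename (the handler
signature is assumed, and 'netvm' is just an example property):

    import qubes.ext

    class MyExtension(qubes.ext.Extension):
        @qubes.ext.handler('property-reset:netvm')
        def on_netvm_reset(self, vm, event, name, oldvalue=None):
            # the property is back to its (possibly dynamic) default value;
            # re-read vm.netvm instead of relying on the old value
            self.reconfigure_network(vm)   # hypothetical action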
Fixes QubesOS/qubes-issues#5834
...when verifying a restored old backup. It wasn't present in the
backup, but its presence is expected in some cases. Properly setting
the 'servicevm' feature is tested elsewhere.
Remove the intermediate qubesd-query-fast proxy process.
This requires changing the socket protocol to match what qrexec sends
in the header.
Fixes QubesOS/qubes-issues#3293
* origin/pr/326:
ext/admin: workaround for extension's __init__() called multiple times
tests: teardown fixes
travis: include core-qrexec in tests
api/admin: (ext/admin) limit listing VMs based on qrexec policy
api/internal: extract get_system_info() function
- wait for the client to be listed in dom0
- report parecord stderr
- allow up to 20ms to be missing, to account for a potentially
  suspended device initially
Make the check that the remote file wasn't removed meaningful.
Previously the user didn't have permission to remove the source file,
so even if the tool had tried, it would have failed.
Various qrexec tests create an auxiliary process (service_proc) as a
local variable. In case of a test failure, process cleanup isn't
called, which may lead to FD leaks and break subsequent tests.
Fix this by always saving such a process instance in self.service_proc
and cleaning it up in self.tearDown() (this code is already there).
Also add waiting for (and, in case of timeout, killing) the service
call process.
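Roughly, the pattern the tests now follow (a fragment inside a test
method; the service name and timeout are examples, asyncio is assumed
to be imported):

    # keep the handle where tearDown() already knows how to clean it up
    self.service_proc = self.loop.run_until_complete(
        self.testvm1.run_service('test.Service'))
    try:
        self.loop.run_until_complete(
            asyncio.wait_for(self.service_proc.wait(), timeout=10))
    except asyncio.TimeoutError:
        self.service_proc.kill()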
First the main bug: when the meminfo xenstore watch fires, in some
cases (just after starting some domain) XS_Watcher refreshes its
internal list of domains before processing the event. This is done
specifically to include the new domain there. But the opposite can
happen too - the domain could have been destroyed. In that case the
refresh_meminfo() function raises an exception, which isn't handled and
interrupts the whole xenstore watch loop. This issue is likely to be
triggered by killing a domain, as that way it can disappear shortly
after writing an updated meminfo entry. On a proper shutdown,
meminfo-writer is stopped earlier and does not write updates just
before domain destroy.
Fix this by checking whether the requested domain is still there just
after refreshing the list.
Then, catch exceptions in the xenstore watch handling functions, so
they do not interrupt the xenstore watch loop. If that loop gets
interrupted, qmemman basically stops balancing memory.
And finally, clear the force_refresh_domain_list flag after refreshing
the domain list. That missing line caused a domain list refresh on
every meminfo change, making it use somewhat more CPU time.
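The shape of the fix, approximately (method and attribute names are
from memory and may not match the actual qmemman code exactly):

    def meminfo_changed(self, domain_id):
        if self.force_refresh_domain_list:
            self.refresh_domain_list()
            self.force_refresh_domain_list = False      # clear the flag
        if domain_id not in self.watch_token_dict:
            # the domain was destroyed between the watch firing and now
            return
        try:
            self.refresh_meminfo(domain_id)
        except Exception:
            # don't let one bad event kill the whole xenstore watch loop
            self.log.exception('error handling meminfo change')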
While at it, change "EOF" log message to something a bit more
meaningful.
Thanks @conorsch for capturing valuable logs.
Fixes QubesOS/qubes-issues#4890
... during tests.
The qubes.ext.Extension class is a weird thing that tries to make each
extension a singleton. Unfortunately, this has the side effect that
__init__() is called separately for each "instance" (created in
Qubes()'s __init__()), even though it is really the same object. During
normal execution this isn't an issue, because there is just one Qubes()
object instance. But during tests, multiple objects are created.
In this particular case, it caused PolicyCache() to be created twice,
and the second one overrode the first - without properly cleaning it
up. This leaks a file descriptor (the inotify one). The fact that
cleanup() was called twice too didn't help, because it was really
called on the same object, and the one requiring cleanup was already
gone.
Work around this by checking whether the policy_cache field is already
initialized and avoiding re-initializing it. Also, on Qubes() object
cleanup, remove that field, so it can be properly initialized on the
next test iteration.
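The workaround, roughly (the import path of PolicyCache is omitted
here):

    class AdminExtension(qubes.ext.Extension):
        def __init__(self):
            super().__init__()
            # Extension is a singleton, but __init__() runs once per Qubes()
            # instance; keep the first PolicyCache instead of leaking it
            if getattr(self, 'policy_cache', None) is None:
                self.policy_cache = PolicyCache()
                self.policy_cache.initialize()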
Add a few missing app.close() calls on test teardown.
Fix socket cleanup in TC_00_QubesDaemonProtocol() - not only close the
FD, but also unregister it from the asyncio event loop.
Various Admin API calls, when directed at dom0, retrieve a global
system view instead of information about a specific VM. This applies to
admin.vm.List (called at dom0 it retrieves the full VM list) and
admin.Events (called at dom0 it listens for events of all the VMs).
This makes it tricky to configure a management VM with access to only a
limited set of VMs, because many tools require the ability to list VMs,
and that would return the full list.
Fix this issue by adding a filter to the admin.vm.List and admin.Events
calls (using event handlers in AdminExtension) that filters the output
using qrexec policy. This version evaluates the policy for each VM or
event (but loads it only once). If performance becomes an issue, it can
be optimized later.
Fixes QubesOS/qubes-issues#5509
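Conceptually, the filtering looks like this (the policy calls are
paraphrased; policy_allows() is a stand-in, not the real interface):

    def filter_vm_list(self, src_vm, vms):
        # evaluate the qrexec policy once per VM, with the policy itself
        # loaded only once via the cache
        policy = self.policy_cache.get_policy()
        return [vm for vm in vms
                if policy_allows(policy, src=src_vm.name,
                                 service='admin.vm.List', target=vm.name)]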
* origin/pr/330:
gui: fixes from Marek's comments
gui: improvements of feature keyboard layout checks
tests: adapt tests for keyboard-layout
gui: drop legacy qubes-keyboard support
and do not mark it as an expected failure anymore. Note the removal of
the expected failure isn't just about the changes here, but also about
the actual fix on the qrexec side (commit ffafd01 "Fix not closed file
descriptors in qubes-rpc-multiplexer" in the core-qrexec repository).
QubesVM.run_for_stdio() captures stderr by default. In case of a call
failure (non-zero return code), the captured stderr is included in the
exception object, but isn't printed by the default CalledProcessError
message.
Make it visible by:
- handling CalledProcessError and including the stderr in the test
  failure message (when the exception is captured already)
- not capturing stderr (if no exception handling is present in the
  test)
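For the first bullet, the test-side handling looks roughly like this
(the command is just an example):

    import subprocess

    try:
        stdout, stderr = self.loop.run_until_complete(
            self.testvm1.run_for_stdio('some-command'))
    except subprocess.CalledProcessError as e:
        self.fail('some-command failed ({}): {}'.format(
            e.returncode, e.stderr.decode(errors='replace')))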
The main use case for this function is to create qrexec services in
VMs. Since qrexec now requires service scripts to be executable, make
create_remote_file() adjust the permissions.
If any test-* VMs remain from a previous test run, they are removed
before the test. self.app doesn't exist at this point, so don't require
it in self.remove_vms().
This commit adds a test case for the QubesVM class's is_fully_usable
method. The verified scenarios are as follows:
* The VM has qrexec enabled, and the qrexec service has been
successfully started.
(The VM becomes "fully usable" in this case.)
* The VM has qrexec enabled, and the qrexec service has failed to
start. (Error handling case; the VM is *not* fully usable.)
* The VM does *not* have qrexec enabled.
(The VM becomes "fully usable" in this case.)
Prior to this commit, a properly configured Linux HVM would not
transition from the 'Transient' state to the 'Running' state according
to qvm-ls output, even if the HVM in question had the 'qrexec' feature
disabled.
This issue was caused by an unconditional qrexec check in the
'on_domain_is_fully_usable' method; it is resolved by adding a check
that short-circuits the qrexec check if the aforementioned feature is
not enabled for the VM in question.
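The added check, approximately (the surrounding handler plumbing is
omitted; qrexec_agent_responding() is a hypothetical helper):

    if not vm.features.check_with_template('qrexec', False):
        # the VM doesn't advertise qrexec at all - don't wait for the
        # agent, consider the domain fully usable once it is running
        fully_usable = True
    else:
        fully_usable = qrexec_agent_responding(vm)   # hypothetical helper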
* origin/pr/295:
tests: fix tag name in audiovm test
tests: ensure notin while setting Audio/Gui VM
gui: add checks for changing/removing guivm
audio: add checks for changing/removing audiovm
audio/gui: use simply vm.tags instead of list()
tests: fix tests for gui/audio vm
Make pylint happy
gui/audio: fixes from Marek's comments
Allow AudioVM to be ran after any attached qubes
Allow GuiVM to be ran after any attached qubes
xid: ensure vm is not running
tests: fix missing default audiovm and guivm tags
gui, audio: better handling of start/stop guivm/audiovm
gui, audio: ensure guivm and audiovm tag are set
Support for AudioVM