Deduplicate entries when iterating over TestVMsCollection values. Some
tests add a given VM multiple times, to have it available under
different kinds of keys (name, uuid, etc.), similar to the real
VMsCollection.
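A minimal sketch of the idea, assuming TestVMsCollection is a plain
dict subclass (the exact helper layout is an assumption):

    class TestVMsCollection(dict):
        """Test double for VMsCollection: one VM may be registered
        under several keys (name, qid, uuid, ...)."""

        def __iter__(self):
            # deduplicate: yield each VM object once, even when it is
            # stored under multiple keys
            seen = set()
            for vm in self.values():
                if id(vm) not in seen:
                    seen.add(id(vm))
                    yield vm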
* origin/pr/254:
vm: allow StandaloneVM to be a DVM template
vm: do not allow setting template_for_dispvms=False if there are any DispVMs
vm: move DVM template specific code into separate mixin
* test-fixes20200806:
tests/extra: add vm.run(..., gui=) argument
tests: collect detailed diagnostics on failure
tests: workaround a race in qrexec test
tests: fix audio recording test
tests: make qvm-sync-clock test more reliable
Help debug test failures by collecting detailed information on
failure. It will be logged to the standard logger, which will end up
either on stderr or in journalctl.
qrexec-client-vm may return earlier than its child process (it exits
right away, without waiting for its child). Add a small wait before
reading the exit code from a file.
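A sketch of the workaround; the file path, timeout, and helper name
are assumptions:

    import time

    def read_exit_code(path, timeout=5.0, interval=0.1):
        """Poll for the exit-code file: qrexec-client-vm may return
        before its child has written it."""
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            try:
                with open(path) as f:
                    return int(f.read())
            except (FileNotFoundError, ValueError):
                time.sleep(interval)  # file missing or incomplete yet
        raise TimeoutError('exit code not written: ' + path)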
* paranoid-restore:
tests: paranoid backup restore
Add policy for paranoid mode backup restore
Add an extension preventing starting a VM while it's being restored
Add support for 'tag-created-vm-with' feature
To calculate the frequency, the test needs to use samples per second
(44100), not samples per recording length. The latter caused shorter
recordings to fall outside the accepted margin.
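Illustrative arithmetic; the zero-crossing estimate is an assumption
about how the test measures the tone frequency:

    SAMPLE_RATE = 44100  # samples per second

    def estimate_frequency(zero_crossings, num_samples):
        # divide by the recording length in seconds, derived from the
        # sample rate - not by the raw sample count
        duration_s = num_samples / SAMPLE_RATE
        return zero_crossings / 2 / duration_s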
The policy allows a VM with the 'backup-restore-mgmt' tag to create
VMs, and then manage VMs with the 'backup-restore-in-progress' tag
(which is added by AdminExtension, based on the 'tag-created-vm-with'
feature).
A VM with the 'backup-restore-mgmt' tag can also call the
qubes.RestoreById service on a VM with the 'backup-restore-storage'
tag. This service allows retrieving the backup archive.
QubesOS/qubes-issues#5310
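A sketch of the qubes.RestoreById rule in 4.0-style policy syntax;
the file path and the trailing deny line are assumptions, not the
shipped policy:

    # /etc/qubes-rpc/policy/qubes.RestoreById
    $tag:backup-restore-mgmt  $tag:backup-restore-storage  allow
    $anyvm                    $anyvm                       deny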
Compare the VM's time with the "current" time retrieved from the
ClockVM just before the comparison, not with the test start time. This
should work even if the test machine is quite slow (the test taking
more than 30s).
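A sketch of the fixed comparison, using qvm-run --pass-io for
illustration (the actual test drives the VM objects directly):

    import subprocess

    def vm_time(vm_name):
        # read the current UNIX time inside a VM
        out = subprocess.check_output(
            ['qvm-run', '--pass-io', vm_name, 'date -u +%s'])
        return int(out)

    # fetch both timestamps back-to-back, instead of comparing
    # against the time recorded at test start
    assert abs(vm_time('testvm') - vm_time('clockvm')) < 30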
Do not allow starting a VM while the restoring management VM still has
control over it. Specifically, the restoring VM will not be able to
start the just-restored VM.
QubesOS/qubes-issues#5310
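A minimal sketch of such an extension, assuming the
'backup-restore-in-progress' tag and the domain-pre-start event (the
actual class name and error message may differ):

    import qubes.exc
    import qubes.ext

    class BackupRestoreExtension(qubes.ext.Extension):
        @qubes.ext.handler('domain-pre-start')
        def on_domain_pre_start(self, vm, event, **kwargs):
            # refuse to start a VM the restoring VM still controls
            if 'backup-restore-in-progress' in vm.tags:
                raise qubes.exc.QubesVMError(
                    vm, 'restore of this VM is still in progress')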
When a VM with the 'tag-created-vm-with' feature set creates a VM
(using the Admin API), the new VM will get all the tags listed in the
feature. Multiple tags are separated with spaces.
This will be useful to tag VMs created during paranoid mode backup
restore.
QubesOS/qubes-issues#5310
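A sketch of the feature's effect in the Admin API code handling VM
creation (function and variable names are assumptions):

    def apply_created_vm_tags(creator_vm, new_vm):
        # tags listed in the creator's feature, separated by spaces
        for tag in creator_vm.features.get('tag-created-vm-with',
                                           '').split():
            new_vm.tags.add(tag)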
* devel20200705:
tests: skip gnome-terminal on xfce template flavor
tests: fix FD leak in qrexec test
tests: switch default LVM pool to qubes_dom0/vm-pool
backup: fix error handler for scrypt errors
Adjust code for possibly coroutine Volume.export() and Volume.export_end()
storage: add Volume.export_end() function
backup: add support for calling a function after backing up a file/volume
backup: call volume.export() just before actually extracting it
vm/dispvm: place all volumes in the same pool as DispVM's template
tests: extend TestPool storage driver to make create_on_disk work
storage: pass a copy of volume_config to pool.init_volume
tests: cleanup properly in wait_on_fail decorator
* origin/pr/352:
audio: set sink volume to workaround alsa save/restore issue
audio: increase timeout to match hvm loading
audio: auxiliary pauses should be harmless now, place them back just in case
audio: add silence threshold
audio: unload guest's module-vchan-sink in hvm tests
audio: fix prepare_audio_vm
audio: do not use pacat on copying audio_in.raw
Audio: rework audio tests
Similar to property-reset:xid, emit property-reset:stubdom_xid when
the domain is started/stopped. This allows the client side of the
Admin API (qubes-core-admin-client) to invalidate the cache when
necessary.
Found by audio tests: #352
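A sketch of the emitting side, mirroring the existing
property-reset:xid pattern (the exact call sites are in the domain
start/stop paths; the wrapper name is hypothetical):

    def notify_stubdom_xid_reset(vm):
        # let Admin API clients drop their cached stubdom_xid value
        vm.fire_event('property-reset:stubdom_xid',
                      name='stubdom_xid')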
Now Volume.export() may be a coroutine and also may be accompanied by
Volume.export_end() cleaning up after it.
See previous commits for building blocks for this.
This commit adjusts usage of Volume.export() and adds matching
Volume.export_end() throughout the code base.
Fixes QubesOS/qubes-issues#5935
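A sketch of the caller-side pattern this commit applies throughout;
the coro_maybe() helper is written out here as an assumption:

    import asyncio

    async def coro_maybe(value):
        # await only when the driver returned a coroutine
        return (await value) if asyncio.iscoroutine(value) else value

    async def backup_one_volume(volume, extract):
        path = await coro_maybe(volume.export())
        try:
            await extract(path)
        finally:
            # matching cleanup introduced by this series
            await coro_maybe(volume.export_end(path))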
This is a counterpart to Volume.export(). Up until now, no driver
needed any cleanup after exporting data, but that doesn't mean there
never will be. This is especially relevant because Volume.export() is
supposed to return the path of a snapshot from before VM start, which
may be different from the currently active one.
QubesOS/qubes-issues#5935
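A sketch of a driver that needs such cleanup; the snapshot helpers
are hypothetical:

    import qubes.storage

    class SnapshotVolume(qubes.storage.Volume):
        async def export(self):
            # expose the snapshot from before VM start, which may not
            # be the currently active image
            return await self._activate_snapshot()

        async def export_end(self, path):
            # counterpart cleanup for whatever export() set up
            await self._deactivate_snapshot(path)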
When Volume.export() is called late and can also be a coroutine, it
may make sense to also have a cleanup function for the changes it
makes.
This commit only adjusts backup code internals; it doesn't call the
appropriate Volume function yet.
QubesOS/qubes-issues#5935
There are two reasons for this:
- to call it from a coroutine, allowing export() itself to be a
  coroutine
- to avoid calling export() when only collecting the preliminary
  backup summary
Both need some more changes in other parts of the codebase to be
useful (see the next commits).
This will be especially useful when export() needs to make some
changes (like creating a snapshot, mounting something, etc.).
QubesOS/qubes-issues#5935
Make the pool of all volumes controlled by the DisposableVM template.
Specifically, this places the DispVM's volatile volume directly in
the same pool as its template's.
Fixes QubesOS/qubes-issues#5933
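A sketch of the effect on the DispVM's volume configuration
(attribute names are assumptions):

    def inherit_template_pools(dispvm):
        # place each DispVM volume (volatile included) in the pool of
        # the corresponding volume of its DisposableVM template
        for name, config in dispvm.volume_config.items():
            config['pool'] = dispvm.template.volumes[name].pool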
Add a dummy TestVolume with an empty create() method. Other core code
also requires TestPool.get_volume to be implemented, so add that too
(a naive version remembering instances returned from
TestPool.init_volume).
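A sketch of the naive version described above; constructor details
are assumptions:

    import qubes.storage

    class TestVolume(qubes.storage.Volume):
        def create(self):
            pass  # dummy: tests allocate nothing on disk

    class TestPool(qubes.storage.Pool):
        def __init__(self, *args, **kwargs):
            super().__init__(*args, **kwargs)
            self._volumes = {}

        def init_volume(self, vm, volume_config):
            volume = TestVolume(pool=self, **volume_config)
            self._volumes[volume.vid] = volume  # remember the instance
            return volume

        def get_volume(self, vid):
            return self._volumes[vid]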
Avoid local modifications in a pool's init_volume influencing
vm.volume_config. Currently every pool driver replaces
volume_config['pool'] with a pool object (instead of a name), which
leads to confusing cases where, depending on the start stage, it is
sometimes an object and sometimes a string.
Additionally, some pool drivers may modify volume_config in
unexpected ways - for example, the test pool driver removes the
'pool' entry entirely. Avoid this fragile interface by giving the
pool driver a copy of volume_config, instead of vm.volume_config
directly.
Note one side effect: 'vid' (and other pool-specific parameters) is
not set in vm.volume_config directly after creating a VM, but
possibly only after loading from XML. This should not be an issue in
theory (no core code should expect it), but if some place uses
volume_config instead of the Volume instance for getting
pool-specific options, it should be fixed.
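The essence of the change at the call site (the surrounding code is
an assumption):

    # hand each driver its own copy, so replacing 'pool' with a Pool
    # object (or dropping keys) cannot leak into vm.volume_config
    volume = pool.init_volume(vm, volume_config.copy())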
Close the transport used to wait for user input, otherwise all
further tests would fail on cleanup (FD leak detected). In practice
this only matters when using the wait_on_fail decorator without the
--failfast option.
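A sketch of the fixed pattern in the decorator, assuming asyncio
stream helpers are used to read from stdin:

    import asyncio
    import sys

    async def wait_for_enter(loop):
        reader = asyncio.StreamReader()
        transport, _ = await loop.connect_read_pipe(
            lambda: asyncio.StreamReaderProtocol(reader), sys.stdin)
        try:
            await reader.readline()  # wait for the user
        finally:
            transport.close()  # avoid tripping the FD-leak check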