qubesd does start other daemons - make sure they will not try to signal
systemd about it. Some of those daemons (qubesdb-daemon) behave
differently depending on this variable.
Do not initialize it only at qubes.xml load time, but re-read vm.kernel
property each time the path is constructed. While at it, add support for
vm.kernel set to 'None' - simply don't include modules.img (xvdd) then.
When a device extension does not report some "persistent" device as
currently attached, still return it, as it will be attached at the next
domain startup. Users can distinguish such devices by
frontend_domain=None (or set to another VM).
Also, return a set from DeviceCollection.assignments().
While at it, adjust the implementation to the specification: tags don't
have a value, only one bit of information (present/not present).
Fixes QubesOS/qubes-issues#2686
This requires an LVM volume group, so create one based on a loop device
in /tmp.
This is rather rough, but if any of this fails, run the tests anyway -
it will simply skip LVM tests.
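A rough sketch of such a test fixture (names, sizes and the use of sudo are assumptions, not the actual test code):

    import subprocess
    import tempfile

    def setup_loop_based_vg(vg_name='qubes_test'):
        # back a throw-away volume group with a sparse file in /tmp;
        # callers skip the LVM tests if any of this fails
        backing = tempfile.NamedTemporaryFile(
            prefix='lvm-test-', suffix='.img', dir='/tmp', delete=False)
        backing.truncate(2 * 1024 ** 3)   # 2 GiB is plenty for the tests
        backing.close()
        loop_dev = subprocess.check_output(
            ['sudo', 'losetup', '--find', '--show',
             backing.name]).decode().strip()
        subprocess.check_call(['sudo', 'pvcreate', loop_dev])
        subprocess.check_call(['sudo', 'vgcreate', vg_name, loop_dev])
        return loop_dev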
Device ident may contain only characters allowed in a qrexec argument.
This will allow using it directly as a qrexec argument in the
Attach/Detach methods.
This also means the PCI extension will need to be updated (it uses ':' in
ident).
QubesOS/qubes-issues#853
This is required to get a shutdown notification when it wasn't initiated
by qubesd (for example the 'poweroff' command inside the VM).
The libvirt event loop implementation must be registered before making a
connection to libvirt, so move it to the beginning of main().
For now, only the 'domain-shutdown' event is emitted.
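A minimal sketch of the ordering requirement (the URI and the event-loop integration are placeholders, not the actual qubesd code):

    import libvirt

    def domain_event_callback(conn, dom, event, detail, opaque):
        # translate VIR_DOMAIN_EVENT_STOPPED into the 'domain-shutdown'
        # event here
        pass

    # must be called before libvirt.open(), otherwise no events are delivered
    libvirt.virEventRegisterDefaultImpl()
    conn = libvirt.open('xen:///')
    conn.domainEventRegisterAny(
        None, libvirt.VIR_DOMAIN_EVENT_ID_LIFECYCLE,
        domain_event_callback, None)
    # something still has to pump the loop, e.g. calling
    # libvirt.virEventRunDefaultImpl() repeatedly or integrating with asyncio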
The user session may not be started at all (for example when no qubes
packages are installed there), so don't block on it in every case. That
would also prevent running the 'qubes.WaitForSession' service...
In practice, the default value of the 'gui' argument is False, so the
user session will usually be ignored. Which doesn't matter in most cases -
especially for services called by qubesd.
Keep it uniform - the QubesVM() object is responsible for handling
vm.dir_path, Storage() is responsible for handling disk volumes (which
may live in that directory).
QubesOS/qubes-issues#2256
Allow a specific pool implementation to be asynchronous. vm.storage.*
methods will detect whether a given implementation is synchronous or
asynchronous and will act accordingly.
It's then up to the pool implementation how asynchrony should be achieved.
Do not force it to use threads (`run_in_executor()`), but a pool
implementation is free to use threads if it considers that safe in a
particular case.
This commit does not touch any pool implementation - all of them are
still synchronous.
QubesOS/qubes-issues#2256
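A minimal sketch of the detection (the helper name is an assumption):

    import asyncio

    async def coro_maybe(value):
        # pool methods may be plain functions or coroutines;
        # await only the latter
        if asyncio.iscoroutine(value):
            return await value
        return value

    async def start_volume(volume):
        return await coro_maybe(volume.start())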
Split the actual filtering done by mgmt-permission: events into calling
the event and applying the returned filters. This way the filtering done
in the mgmt.Events handler can reuse the same function.
...in the core-mgmt-client repository. qubesd isn't the right place to
start GUI applications, which will be even more important when the GUI
domain is something other than dom0.
QubesOS/qubes-issues#833
Some methods inherited from dict (pop and setdefault here) are covered
by placeholders raising NotImplementedError. Let's fix their signatures
(to match those of dict) so callers really get NotImplementedError
instead of TypeError.
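A hypothetical illustration of the fix (not the actual class):

    class ReadOnlyMapping(dict):
        # signatures now match dict.pop(key[, default]) and
        # dict.setdefault(key[, default]), so callers reach the intended
        # NotImplementedError instead of a TypeError about the arguments
        def pop(self, key, *args):
            raise NotImplementedError()

        def setdefault(self, key, default=None):
            raise NotImplementedError()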
This allows avoiding a race condition between registering event handlers
and performing some action. The important part is the event sent after
event handlers are registered in qubesd: state changes (like VM
start/stop) occurring after the 'connection-established' event will be
included in the event stream.
QubesOS/qubes-issues#2622
The Management API gives access only to qubes.property. This is
actually a good thing, so instead of extending it to also cover
builtins.property, add a simple decorator to define read-only, stateless
qubes.property.
QubesOS/qubes-issues#2622
The qvm-ls tool (like all other tools) will be accessing properties
through the API, so there is no need (nor sense) for these tool-specific
attributes in qubes.property. The only one actually used was ls_width,
and in fact it made the output unnecessarily wide.
The tool itself has already been moved to the core-mgmt-client repository.
QubesOS/qubes-issues#853
Standard methods return only one value, after the operation is completed,
but events-related methods may return multiple values while the method is
still executing. Provide a callback for such cases.
Also, according to the specification, avoid sending both event and
non-event values.
QubesOS/qubes-issues#2622
Allow a method handler to decide whether the operation can be cancelled.
If so, when the connection to qubesd is terminated (and
protocol.connection_lost gets called) the operation is cancelled using
the standard asyncio mechanism - asyncio.CancelledError is raised inside
the method handler. This needs to be explicitly enabled, because
cancellable methods are much harder to write while keeping the system
state consistent.
Caveat: protocol.connection_lost is called only when trying to send some
data over the (already terminated) connection. This makes the whole
mechanism useful only for events - otherwise, by the time data is sent
(and the broken connection possibly detected), the operation is already
completed.
QubesOS/qubes-issues#2622
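A rough sketch of the mechanism (class and attribute names are assumptions, not the actual qubesd code):

    import asyncio

    class QubesDaemonProtocol(asyncio.Protocol):
        def __init__(self):
            self.mgmt_task = None
            self.cancellable = False   # set only by methods that opt in

        def execute(self, handler_coro):
            # run the method handler as a task so it can be cancelled later
            self.mgmt_task = asyncio.ensure_future(handler_coro)

        def connection_lost(self, exc):
            if self.cancellable and self.mgmt_task is not None:
                # raises asyncio.CancelledError inside the handler coroutine
                self.mgmt_task.cancel()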
* kalkin/device-assignments: (21 commits)
PCI extension cache PCIDevice objects
Make pylint ♥
Fix pylint warning no-else-return
Fix pylint warning len-as-conditional
device-list-attached event returns a dev/options tuples list
DeviceAssignment options are now a dict
Remove WrongAssignment exception
Rename qubes.devices.BlockDevice to qubes.storage.BlockDevice
Update relaxng devices option element
Fix tests broken by the new assignment api
Fix qubes.tests.devices
Fix pci device integration tests
qvm-device add support for assignments
Update ext/pci to new api
BaseVM add DeviceAssignment xml serialization
Migrate DeviceCollection to new API
Add PersistentCollection helper to qubes.devices
Add DeviceAssignment
qvm-device validates device parameters
qvm-device fix handling of non block devices
...
* core3-policy:
Make pylint happy
tests: disable GTK tests on travis
qubespolicy: make pylint happy
qubespolicy: run GUI code inside user session and expose it as dbus object
tests: plug rpc-window tests into main test runner
qubespolicy: plug GUI code into qrexec-policy tool
rpm: add rpc-window related files to package
rpc-window: adjust for qubespolicy API
rpc-window: use pkg_resources for glade file
rpc-window: use 'edit-find' icon if no other is found
rpc-window: adjust for python3
rpc-window: code style adjustments
Import new rpc confirmation window code
qubesd: add second socket for in-dom0 internal calls
policy: qrexec-policy cli tool
tests: qubespolicy tests
qubespolicy: initial version for core3
vm/appvm: add dispvm_allowed property
dispvm: don't load separate Qubes() instance when handling DispVM
This socket (and its commands) is not exposed to untrusted input, so
there is no need for extensive sanitization. Also, there is no need to
provide a stable API here, as those methods are used internally only.
QubesOS/qubes-issues#853
0) All those methods are now awaitable rather than synchronous.
1) The base method is run_service(). The method run() was rewritten
using run_service('qubes.VMShell', input=...). There is no provision
for running plain commands.
2) Get rid of passio*= arguments. If you'd like to get another return
value, use another method. It's as simple as that.
See (usage sketch below):
- run_service_for_stdio()
- run_for_stdio()
Also gone are wait= and localcmd= arguments. They are of no use
inside qubesd.
3) The qvm-run tool and tests are left behind for now and will be fixed
later. This is because they also need an event loop, which is not
implemented yet.
fixes QubesOS/qubes-issues#1900 QubesOS/qubes-issues#2622
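A hedged usage sketch (exact signatures are assumptions based on the description above):

    async def example(vm):
        # all of these are coroutines now, awaited on qubesd's event loop
        stdout, stderr = await vm.run_service_for_stdio(
            'qubes.VMShell', input=b'ls -l /home\n')
        stdout, stderr = await vm.run_for_stdio('uname -r')
        return stdout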
- Get rid of @not_in_api, exchange for explicit @api() decorator.
- Old @no_payload decorator becomes an argument (keyword-only).
- Factor out AbstractQubesMgmt class to be a base class for other mgmt
backends.
- Use async def instead of @asyncio.coroutine.
QubesOS/qubes-issues#2622
This also means we don't check, at this stage, whether a VM with the
given name (in the case of VMProperty) exists in the system. But this is
ok - let's not duplicate the work of the property setter.
QubesOS/qubes-issues#2622
If kwargs contains a dict as one of its values, it isn't hashable and
can't be used as a value in a frozenset/tuple. Convert such values into
frozenset(dict.items()). Only one (more) level of nesting is supported,
but it should be enough.
Solution from http://stackoverflow.com/a/13264725
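A minimal sketch of the conversion (the helper name is hypothetical):

    def hashable_kwargs(kwargs):
        # one level of dict values is converted; deeper nesting is unsupported
        return frozenset(
            (key, frozenset(value.items()) if isinstance(value, dict) else value)
            for key, value in kwargs.items())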
In theory any call could modify the config (through events), but let's
keep writes to qubes.xml low. In any case, qubes.xml will eventually be
written (either at the next config-modifying call, or at daemon exit).
Sanitization of the input value is tricky here, and also very important.
If the property defines a value type (and it's something more specific
than 'str'), use that. Otherwise allow only printable ASCII characters,
and let the appropriate event and setter validate the value.
At this point I've reviewed all QubesVM properties in this category and
added appropriate setters where needed.
QubesOS/qubes-issues#2622
Don't allow characters potentially interfering with qrexec. To be on the
safe side, allow only alphanumeric characters plus a very few selected
punctuation characters.
Split it into two functions: validate_name for context-less verification,
and the actual _setter_name which performs additional verification in the
context of the actual VM.
Switch to qubes.exc.* exceptions where appropriate.
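A sketch of the context-less part (the exact allowed character set is an assumption):

    import re

    _name_re = re.compile(r'\A[a-zA-Z][a-zA-Z0-9_.-]*\Z')

    def validate_name(untrusted_name):
        # alphanumeric plus a few selected punctuation characters only;
        # mapping this to an appropriate qubes.exc.* exception is up to
        # the caller
        if not _name_re.match(untrusted_name):
            raise ValueError('VM name contains illegal characters')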
This reverts commit 0f1672dc63.
Bring it back. Let's not revert the whole feature just because the
required package exists only in qubes-builder, not in some online
repository.
Also, this revert didn't go as planned - there was a reference to a
'passphrase' local variable, but it was never assigned any value.
Cc: @woju
This will allow more flexible API usage, especially with the mgmt API -
we need to use the VM type as a string there.
We don't lose any flexibility here - VM classes need to be uniquely
identified by a string (used in qubes.xml) anyway.
Use the right cow image and apply the second layer to provide read-write
access. The correct setup is:
- base image + base cow -> read-only snapshot (base changes "cached"
until committed)
- read-only snapshot + VM cow -> read-write snapshot (changes discarded
after VM shutdown)
This way, even a VM without Qubes-specific startup scripts can
benefit from Template VMs, while VMs with Qubes-specific startup scripts
may still see the original root.img content (for possible signature
verification, once the storage domain is implemented).
QubesOS/qubes-issues#2256
The QubesDB daemon no longer removes the socket created by a new
instance, so one part of the VM restart race condition is solved. The
only remaining part is to ensure that we really connect to the new
instance, instead of talking to the old one (soon to be terminated).
Fixes QubesOS/qubes-issues#1694
None of the properties set there do anything "dangerous" to filesystem
permissions (at least for now), so do not require it. This is mostly to
keep compatibility with %post rpm scripts (kernel-qubes-vm at least).
QubesOS/qubes-issues#2412
This tool by design is called as root, so try to:
- switch to normal user if possible
- fix file permissions afterwards - if not
QubesOS/qubes-issues#2412
There was a comment '# Set later', but the values were actually never set.
This broke adding a just-installed template (qvm-template-postprocess).
QubesOS/qubes-issues#2412
When the system is installed with an LVM thin pool, it should be used by
default. But let's keep the file-based one for /var/lib/qubes for some
corner cases, migration etc.
QubesOS/qubes-issues#2412
This is intended to be called to finish template installation/removal.
The template RPM package is basically a container for root.img, nothing
more. Other parts need to be generated after root.img extraction.
Previously this was open coded in the rpm post-install script, but let's
keep it as a qvm tool to ease supporting multiple versions in the
template builder.
QubesOS/qubes-issues#2412
VM files may already be removed. Don't fail on this while removing a
VM - it's probably the reason why the domain is being removed in the
first place.
The qvm-remove tool has its own guard for this, but it isn't enough - if
rmtree(dir_path) fails, storage.remove() would not be called, so
non-file storage would not be cleaned up.
This is also needed to correctly handle template reinstallation, where
the VM directory is moved away in order to call create_on_disk again.
QubesOS/qubes-issues#2412
'-' is an invalid character in a python identifier, so all the properties
use '_'. But in previous versions qvm-* tools accepted names with '-',
so let's not break this.
QubesOS/qubes-issues#2412
The /var/log/qubes directory has setgid set, so all the files will be
owned by the qubes group (that's ok), but there is no enforcement of
creating them group-writable, which undermines the group ownership (logs
created by root would not be writable by a normal user).
QubesOS/qubes-issues#2412
In the case of LVM (at least), the "internal" flag is initialized only
when listing volumes attached to a given VM, but not when listing them
from the pool. This looks like a limitation (bug?) of the pool driver; a
much nicer fix is to handle the flag in the qvm-block tool (which lists
VM volumes anyway) than in the LVM storage pool driver (which would need
to keep a second copy of the volume list - just like the file driver does).
QubesOS/qubes-issues#2256
There are multiple cases when snapshots are inconsistently created, for
example:
- "-back" snapshot created from the "new" data, instead of the old one
- "-snap" created even when volume.snap_on_start=False
- probably more
Fix this by following volume.snap_on_start and volume.save_on_stop
directly, instead of using the abstraction of the old volume types.
QubesOS/qubes-issues#2256
Just calling pool.init_volume isn't enough - a lot of code depends on
additional data loaded into the vm.storage object. Provide a convenient
wrapper for this.
At the same time, fix loading extra volumes from qubes.xml - don't fail
on a volume not mentioned in the initial vm.volume_config.
QubesOS/qubes-issues#2256
- add missing lvm remove call when committing changes
- delay creating the volatile image until domain startup (it will be
created then anyway)
- reset the cache only when anything really changed
- attach the VM to the volume (snapshot) created for its runtime, so as
not to expose changes (for example in the root volume) to child VMs until
shutdown
QubesOS/qubes-issues#2412 QubesOS/qubes-issues#2256
The wrapper does nothing more than translate command parameters, but its
load time is significant (mostly because of python imports). Since we
can't use the python lvm API as a non-root user anyway, let's drop the
wrapper and call `lvm` directly (or through sudo when necessary).
This makes VM startup much faster - storage preparation is down from
over 10s to about 3s.
QubesOS/qubes-issues#2256
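A minimal sketch of the direct call (the function name is an assumption):

    import os
    import subprocess

    def qubes_lvm(*cmd):
        # call the lvm binary directly; go through sudo when not running
        # as root
        full_cmd = ['lvm'] + list(cmd)
        if os.getuid() != 0:
            full_cmd = ['sudo'] + full_cmd
        return subprocess.check_output(full_cmd)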
...instead of a manual copy in python. dd is much faster, and when used
with `conv=sparse` it will correctly preserve a sparse image.
QubesOS/qubes-issues#2256
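The idea, sketched (paths are placeholders):

    import subprocess

    def copy_image(src, dst):
        # dd preserves holes thanks to conv=sparse, unlike a naive
        # read/write loop
        subprocess.check_call(
            ['dd', 'if=' + src, 'of=' + dst, 'bs=4M', 'conv=sparse'])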
Set the parameters for possibly hiding the domain's real IP before
attaching the network to it, otherwise we'll have a race condition with
the vif-route-qubes script.
QubesOS/qubes-issues#1143
This is the IP known to the domain itself and to downstream domains. It
may be different from the one seen by its upstream domain.
Related to QubesOS/qubes-issues#1143
This helps hide the VM's IP for anonymous VMs (Whonix) even when some
application leaks it. The VM will know only some fake IP, which should be
set to something as common as possible.
The feature is mostly implemented on the (Proxy)VM side using NAT in a
separate network namespace. The core here only passes arguments to it.
It is designed so that multiple VMs can use the same IP and still not
interfere with each other. Even more: it is possible to address each of
them (using their "native" IP), even when several of them share the same
"fake" IP.
The original approach (marmarek/old-qubes-core-admin#2) passed network
script arguments by appending them to the script name, but libxl in
Xen >= 4.6 fixed that side effect and it isn't possible anymore. So use
QubesDB instead.
From user POV, this adds 3 "features":
- net/fake-ip - IP address visible in the VM
- net/fake-gateway - default gateway in the VM
- net/fake-netmask - network mask
The feature is enabled if net/fake-ip is set (to some IP address) and is
different from the VM's native IP. All of those "features" can be set on
the template, to affect all of its VMs.
Firewall rules etc. in the (Proxy)VM should still be applied to the VM's
"native" IP.
Fixes QubesOS/qubes-issues#1143
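Usage sketch (the addresses are examples only):

    # set on the template, so all VMs based on it share the same fake address
    template.features['net/fake-ip'] = '192.168.1.128'
    template.features['net/fake-gateway'] = '192.168.1.1'
    template.features['net/fake-netmask'] = '255.255.255.0'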
Core3 keeps the information about whether a property has its default
value for all properties (not only a few like netvm or kernel). Try to
use this feature as much as possible.
When the user included/excluded some VMs for restoration, it may be
necessary to fix the dependencies between them (for example when the
default template is no longer going to be restored).
Also fix handling of conflicting names.
Now that the file name is also integrity protected (prefixed to the
passphrase), we can make sure that the input files are given in the same
order, and are parts of the same VM.
QubesOS/qubes-issues#971
This prevents swapping parts of a backup of the same VM between different
backups made by the same user (or actually: with the same passphrase).
QubesOS/qubes-issues#971
The previously used `openssl dgst` and `openssl enc` handle key
stretching poorly - in the case of `openssl enc` the encryption key is
derived using a single MD5 iteration, without even any salt. This hardly
prevents brute force or even rainbow table attacks. To make things worse,
the same key is used for encryption and integrity protection, which eases
brute force even further.
All this is still about brute force attacks, so when using a long, high
entropy passphrase, it should still be relatively safe. But let's do
better.
According to the discussion in QubesOS/qubes-issues#971, the scrypt
algorithm is a good choice for key stretching (it isn't the best of all
existing ones, but a good and widely adopted one). At the same time,
let's switch away from the `openssl` tool, as it is very limited and
apparently not designed for production use. Use the `scrypt` tool
instead, which is very simple and does exactly what we need - encrypt the
data and integrity protect it. Its archive format has its own (simple)
header with the data required by the `scrypt` algorithm, including the
salt. Internally the data is encrypted with AES256-CTR and integrity
protected with HMAC-SHA256. For details see:
https://github.com/tarsnap/scrypt/blob/master/FORMAT
This means a change of the backup format. Mainly:
1. The HMAC is stored in the scrypt header, so don't use a separate file
for it. Instead keep the data in files with the `.enc` extension.
2. For compatibility leave `backup-header` and `backup-header.hmac`, but
`backup-header.hmac` is really a scrypt-encrypted version of `backup-header`.
3. For each file, prepend its identifier to the passphrase, to
authenticate the filename itself too. With this we can guard against
reordering archive files within a single backup and across backups. The
identifier is built as:
backup ID (from backup-header)!filename!
For backup-header itself, there is no backup ID (just 'backup-header!').
Fixes QubesOS/qubes-issues#971
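A sketch of the identifier construction from point 3 (the function name is hypothetical):

    def scrypt_passphrase(backup_id, filename, passphrase):
        # 'backup-header' gets no backup ID prefix; every other file gets
        # 'backup_id!filename!' prepended, so the file name itself is
        # authenticated by scrypt's HMAC
        if filename == 'backup-header':
            prefix = 'backup-header!'
        else:
            prefix = '{}!{}!'.format(backup_id, filename)
        return prefix + passphrase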
Have a generic `handle_streams` function, instead of
`wait_backup_feedback` with open-coded process names and manual
iteration over them.
No functional change, besides a minor logging change.
Use the just-introduced tar writer to archive the content of LVM volumes
(or more generally: block devices). Place them as 'private.img' and
'root.img' files in the backup - just like in the old format. This
requires support for replacing the file name in the tar header - another
thing trivially supported by the tar writer.
tar can't write an archive with the _contents_ of a block device. We need
this to back up LVM-based disk images. To avoid dumping the image to a
file first, create a simple tar archiver just for this purpose.
Python is not the fastest possible technology - it's 3 times slower than
an equivalent written in C. But it's much easier to read, much less
error-prone, and still processes a 1GB image in under 1s (CPU time,
leaving aside actual disk reads). So, it's acceptable.
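The commit implements its own minimal archiver; this standard-library sketch only illustrates the crux - the member size must be given explicitly, because stat() reports 0 for a block device (names and paths are placeholders):

    import tarfile

    def tar_block_device(archive_path, device_path, member_name, size):
        with tarfile.open(archive_path, 'w') as archive:
            member = tarfile.TarInfo(name=member_name)  # e.g. 'private.img'
            member.size = size         # real device size, e.g. from blockdev
            with open(device_path, 'rb') as device:
                # streams exactly `size` bytes from the open device
                archive.addfile(member, device)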
The old backup metadata (old qubes.xml) does not contain info about
individual volume sizes. So, extract it from the tar header (using
verbose output during restore) and resize the volume accordingly.
Without this, restoring volumes larger than the default would be
impossible.
To ease all this, rework the restore workflow: first create QubesVM
objects and all their files (as for a fresh VM), then override them with
data from the backup - possibly redirecting some files to a new location.
This allows generic code to create LVM volumes and then only restore
their content.
1. Add a helper function on vm.storage. It is equivalent to:
vm.storage.get_pool(vm.volumes[name]).export(vm.volumes[name])
2. Make sure the path returned by `export` on an LVM volume is accessible.
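A sketch of the helper (assuming it lives on the Storage class and keeps the name 'export'):

    def export(storage, volume_name):
        # equivalent of the long form quoted above
        volume = storage.vm.volumes[volume_name]
        return storage.get_pool(volume).export(volume)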
First part - handling firewall.xml and rules formatting.
Specification on https://qubes-os.org/doc/vm-interface/
TODO (for dom0):
- plug into QubesVM object
- expose rules in QubesDB (including reloading)
- drop old functions (vm.get_firewall_conf etc)
QubesOS/qubes-issues#1815
Instead of an excerpt from /proc/meminfo, use just one integer. This
makes qmemman handling much easier and eases implementation for non-Linux
OSes (where /proc/meminfo doesn't exist).
For now, also keep support for the old format.
Fixes QubesOS/qubes-issues#1312
There is no point in changing the *public API* just for the sake of
change, without any better reason. It turned out most of those settings
will be the same in Qubes 4.0, so keep the names the same.
This reverts commit 2d6ad3b60c.
QubesOS/qubes-issues#1812
This is a migration of core2 commits:
commit d0ba43f253
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Mon Jun 6 02:21:08 2016 +0200
core: start guid as normal user even when VM started by root
Another attempt to avoid permissions-related problems...
QubesOS/qubes-issues#1768
commit 89d002a031
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Mon Jun 6 02:19:51 2016 +0200
core: use runuser instead of sudo for switching root->user
There are problems with using sudo in early system startup
(systemd-logind not running yet, pam_systemd timeouts). Since we don't
need full session here, runuser is good enough (even better: faster).
commit 2265fd3d52
Author: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
Date: Sat Jun 4 17:42:24 2016 +0200
core: start qubesdb as normal user, even when VM is started by root
On VM start, the old qubesdb-daemon is terminated (if still running). In
practice this happens only at VM restart (shutdown and quick start
again). But in that case, if the VM was started by root, such an
operation would fail.
So when the VM is started by root, make sure that qubesdb-daemon will be
running as a normal user (the first user in the 'qubes' group - there
should be only one).
Fixes QubesOS/qubes-issues#1745
Commit from core2:
commit 94d52a13e7
core: adjust guid parameters when running on KDE5
On KDE5 the native decoration plugin is used and requires special
properties to be set (instead of `_QUBES_VMNAME` etc).
Special care needs to be taken when detecting the environment, because
environment variables aren't good enough - this script may be running
with a cleared environment (through sudo, or from systemd). So check the
properties of the X11 root window.
QubesOS/qubes-issues#1784
* core3-devices:
Fix core2migration and tests for new devices API
tests: more qubes.devices tests
qubes/ext/pci: implement pci-no-strict-reset/BDF feature
qubes/tools: allow calling qvm-device as qvm-devclass (like qvm-pci)
qubes: make pylint happy
qubes/tools: add qvm-device tool (and tests)
tests: load qubes.tests.tools.qvm_ls
tests: PCI devices tests
tests: add context manager to catch stdout
qubes/ext/pci: move PCI devices handling to an extension
qubes/devices: use more detailed exceptions than just KeyError
qubes/devices: allow non-persistent attach
qubes/storage: misc fixes for VM-exposed block devices handling
qubes: new devices API
Fixes QubesOS/qubes-issues#2257
Instead of the old per-VM flag 'pci_strictreset', implement this as a
per-device flag using features. To not fail on a particular device
assignment, set 'pci-no-strict-reset/DEVICE-BDF' to True - for
example 'pci-no-strict-reset/00:1b.0'.
QubesOS/qubes-issues#2257
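Usage sketch (the BDF value is an example):

    # allow assigning this one device even though it can't be properly reset
    vm.features['pci-no-strict-reset/00:1b.0'] = True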
Implement required event handlers according to documentation in
qubes.devices.
A modification of qubes.devices.DeviceInfo is needed to allow dynamic,
read-only properties.
QubesOS/qubes-issues#2257
Add a 'backenddomain' element when the source (not target) domain is not
dom0. Fix the XML element name. Actually set volume.domain when listing
VM-exposed devices.
QubesOS/qubes-issues#2256
Allow a device plugin to list attached and available devices. Enforce
at the API level that every device is exposed by some domain.
This commit only changes the devices API; it does not update existing
users (pci) yet.
QubesOS/qubes-issues#2257
This script should run as fast as possible, so avoid importing a large
module. In fact the only thing used was the argparse wrapper, so switch
to the standard one and drop the aliases.
QubesOS/qubes-issues#2256
Some tests do not apply, as there is no savefile and attributes
propagation is much simpler. Dropped tests:
- test_000_firewall_propagation
- test_001_firewall_propagation
- test_000_prepare_dvm
QubesOS/qubes-issues#2253
- fix assigning the 'template' property - do not do it if the VM already
has it set
- cap default maxmem at 4000, as we clamp it to 10*memory anyway (and
default memory is 400)
- DispVM is no longer a special case for storage
- Add missing 'rw=True' for volatile volume
- Handle storage initialization (copy&paste from AppVM)
- Clone properties from DispVM template
QubesOS/qubes-issues#2253
It is useful in some cases to prevent talking to the hypervisor.
Warning - it makes sense only when the action does not access any runtime
VM status. For example, running the domain will fail, but changing its
properties should work.
Do not access vm.libvirt_domain after it has already been removed - this
would redefine it again in libvirt, just to undefine it a moment later.
On the other hand, a few lines below there is a fallback libvirt cleanup,
in case the proper one does not work.
Since "qubes: fix event framework", handlers from extensions looks the
same as from the VM class itself, so it isn't possible to order them
correctly. Specification says:
For each class first are called bound handlers (specified in class
definition), then handlers from extensions. Aside from above,
remaining order is undefined.
So, restore this property, which is later correctly used to order
handlers.
When properly set, applications will have a chance to automatically
detect HiDPI and act accordingly. This is the case for the Fedora 23
template and GNOME apps (maybe even all apps built on top of GTK).
But for privacy reasons, don't provide the real values, only approximate
ones. Give just enough information to distinguish DPI above 150, 200 and
300. This is a compromise between privacy and HiDPI support.
QubesOS/qubes-issues#1951
This commit is migrated from gui-daemon repository
(dec462795d14a336bf27cc46948bbd592c307401).
Now failure to load external tests shows in which entry point the error
happened and a useful traceback. The traceback extends from the "try"
statement down to the actual error line, but it does not include the
frames above, i.e. from the invocation to the load_tests routine. This is
a limitation of Python itself and usually not a problem.
This directory is not only for disk images (in fact disk images may be
elsewhere, depending on the chosen volume pool), so it is cleaner to
handle (create/remove) it directly in the QubesVM class.
1. It is not yet clear whether dispvm_netvm will be implemented in core3,
but probably not.
2. Remove tests for setting memory/cpu above host resources - rejecting
those values at property set time would break backup restore on some
machines (when migrating from a bigger to a smaller system).
In some places this function was called with the full volume object, in
others with just the file path. Since it is also used in some volume
init/teardown code, use the path everywhere.
To successfully load all the data, proceed in order:
- set app.default_kernel
- load all templates
- set app.default_template
- load other VMs
- update network dependencies between VMs
- set other global properties
Apparently the most important (the only?) property required in offline
mode is "is_running". So let's patch it to return False and make sure any
other libvirt usage results in failure.
Or maybe it would be better to simply return False from vm.is_running
when the libvirt connection fails? But then it would not be possible to
use offline mode while having (some, probably unrelated) libvirtd running
at the same time.
Fixes QubesOS/qubes-issues#2008
Check vm.template directly, throwing AttributeError when not found.
There may be some value in converting it to a more descriptive error, but
since that's mostly for internal users (not user-facing actions), don't
bother for now.
QubesOS/qubes-issues#1842
It may make sense to create a 'snapshot' volume out of an existing
'snapshot', not only out of an 'origin'. In practice it will be exactly
the same as a 'snapshot' connected directly to the 'origin'.
QubesOS/qubes-issues#866
- Use full import paths in qvm-pool
- Add, Remove, Info and List options set `Namespace.command`. This fixes a crash
when `-o dir_path=/mnt/foo` is specified after `-a foo xen`.
- Remove `_List`
- Remove 'added pool' and 'removed pool' messages. Unix tools are quiet
- qvm-pool calls app.save()
- Rename create_parser to get_parser
- Rename local_parser variables to just parser
- qvm-pool uses print_table