admin.pool.UsageDetails reports the usage data, unlike
admin.pool.Info, which should report only the config/unchangeable data.
At the moment admin.pool.Info still reports usage, to maintain
compatibility, but once all relevant tools are updated,
it should return just the configuration data.
Added a usage_details method to the Pool class
(returns a dictionary with detailed information
on pool usage) and an LVM implementation that also returns
metadata info.
Needed for QubesOS/qubes-issues#5053
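A minimal sketch of what such a usage_details implementation could return
for an LVM thin pool; the dictionary keys and the pool_stats input are
illustrative assumptions, not the actual API:

    # Hypothetical sketch: usage_details() returns only dynamic usage
    # data; static configuration stays with admin.pool.Info.
    def usage_details(pool_stats: dict) -> dict:
        """Build the usage dictionary from raw pool statistics."""
        return {
            'data_size': pool_stats['size'],
            'data_usage': pool_stats['usage'],
            # LVM thin pools track metadata usage separately
            'metadata_size': pool_stats['metadata_size'],
            'metadata_usage': pool_stats['metadata_usage'],
        }

    print(usage_details({'size': 100, 'usage': 40,
                         'metadata_size': 4, 'metadata_usage': 1}))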
Disk usage may change dynamically, not only at VM start/stop. Refresh the
size cache before reading the usage property, but no more often than once
every 30 seconds (the refresh interval of the disk space widget), as
sketched below.
Fixes QubesOS/qubes-issues#4888
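A minimal sketch of such a time-throttled refresh, assuming a hypothetical
refresh_cache() callable; all names here are illustrative:

    import time

    _REFRESH_INTERVAL = 30  # seconds, matching the disk space widget
    _last_refresh = 0.0

    def refresh_cache_if_stale(refresh_cache):
        """Refresh the size cache, but at most once per interval."""
        global _last_refresh
        now = time.monotonic()
        if now - _last_refresh >= _REFRESH_INTERVAL:
            refresh_cache()
            _last_refresh = now

    refresh_cache_if_stale(lambda: print('cache refreshed'))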
some-vm-root is a valid VM name, and in that case its volume can be
named some-vm-root-private. Do not let this confuse revision listing:
check for an unexpected '-' in the volume revision number.
The proper solution would be to use a different separator, one that is
not allowed in VM names. But that would require migration code, which is
undesirable in the middle of a stable release life cycle.
Fixes QubesOS/qubes-issues#4680
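A sketch of the check, under the assumption that revisions are named by
appending '-<number>' to the volume id (the helper and the test data are
hypothetical):

    def list_revisions(volume_names, vid):
        """Collect revision numbers for a volume, skipping names that
        merely share a prefix (e.g. 'some-vm-root-private' while
        listing revisions of 'some-vm-root')."""
        prefix = vid + '-'
        revisions = []
        for name in volume_names:
            if not name.startswith(prefix):
                continue
            suffix = name[len(prefix):]
            if '-' in suffix or not suffix.isdigit():
                continue  # another volume's name, not a revision
            revisions.append(suffix)
        return revisions

    print(list_revisions(
        ['some-vm-root-1549000000', 'some-vm-root-private'],
        'some-vm-root'))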
LVM operations can take a significant amount of time. This is especially
visible when stopping a VM (`vm.storage.stop()`) - during that time the
whole qubesd freezes for about 2 seconds.
Fix this by making all the ThinVolume methods coroutines (where
supported). Each public coroutine is also wrapped with locking on
volume._lock to avoid concurrency-related problems (see the sketch
below).
This also requires changing the internal helper functions to
coroutines. There are two functions that still need to be called from
non-coroutine call sites:
- init_cache/reset_cache (initial cache fill, ThinPool.setup())
- qubes_lvm (ThinVolume.export())
So those two functions need to exist in two variants. Extract their
common code to separate functions to reduce code duplication.
Fixes QubesOS/qubes-issues#4283
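A minimal sketch of the locking pattern, assuming asyncio and a
per-volume asyncio.Lock; the class and the method body are illustrative,
not the real ThinVolume code:

    import asyncio
    import functools

    def locked(method):
        """Serialize public coroutine methods on the volume's lock."""
        @functools.wraps(method)
        async def wrapper(self, *args, **kwargs):
            async with self._lock:
                return await method(self, *args, **kwargs)
        return wrapper

    class ThinVolumeSketch:
        def __init__(self):
            self._lock = asyncio.Lock()

        @locked
        async def stop(self):
            # run the (slow) LVM command without blocking qubesd
            proc = await asyncio.create_subprocess_exec('true')
            await proc.wait()

    asyncio.run(ThinVolumeSketch().stop())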
* qubesos/pr/228:
storage/lvm: filter out warning about intended over-provisioning
tests: fix getting kernel package version inside VM
tests/extra: add start_guid option to VMWrapper
vm/qubesvm: fire 'domain-start-failed' event even if fail was early
vm/qubesvm: check if all required devices are available before start
storage/lvm: fix reporting lvm command error
storage/lvm: save pool's revision_to_keep property
Do not write directly to the main volume; instead, create a temporary
volume and commit it to the main one only when the operation is finished
(see the sketch below). This solves multiple problems:
- the import operation can be aborted without data loss
- importing new data over an existing volume will not leave traces of
  the previous content - especially when importing a smaller volume
  over a bigger one
- the import operation can be reverted - it creates a separate revision,
  similar to start/stop
- it is easier to prevent the qube from starting during the import
  operation
- the template can still be used while a new version is being imported
QubesOS/qubes-issues#2256
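A file-based sketch of the idea (ordinary files stand in for LVM volumes;
the helper name and '-import' suffix are illustrative): write to a
temporary volume and commit it over the main one only on success.

    import os

    def import_volume(path, write_data):
        """Write imported data to a temporary volume and commit it
        over the main volume only if the import succeeds."""
        tmp = path + '-import'
        try:
            with open(tmp, 'wb') as f:
                write_data(f)
        except BaseException:
            if os.path.exists(tmp):
                os.unlink(tmp)  # abort: the main volume is untouched
            raise
        os.replace(tmp, path)   # commit: atomically replace the volume

    import_volume('/tmp/vm-private', lambda f: f.write(b'new data'))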
First rename the volume to a backup revision, regardless of
revisions_to_keep, then rename the -snap volume to the current volume,
and only then remove the backup revision (if it exceeds
revisions_to_keep). This way, even if the commit operation is
interrupted, there is still a volume with the data (see the sketch
below).
This also requires adjusting a few functions to actually fall back to
the most recent backup revision if the current volume isn't found - the
_vid_current property is created for this purpose.
Also, use the -snap volume for the clone operation and commit it
normally later. This makes the operation safer to interrupt or even
revert.
QubesOS/qubes-issues#2256
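A sketch of the interruption-safe ordering, with the LVM rename/remove
operations passed in as callables (all names here are illustrative):

    def commit(rename, remove, vid, vid_snap, backup_vid,
               backups_to_prune):
        """backups_to_prune: revisions already beyond
        revisions_to_keep."""
        rename(vid, backup_vid)  # 1. keep current data reachable
        rename(vid_snap, vid)    # 2. promote -snap to current
        for backup in backups_to_prune:
            remove(backup)       # 3. prune only after both renames

    log = []
    commit(lambda a, b: log.append(('rename', a, b)),
           lambda v: log.append(('remove', v)),
           'vm-root', 'vm-root-snap', 'vm-root-1549000000',
           ['vm-root-1540000000'])
    print(log)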
Scripts sometimes parse the output of LVM commands (especially `lvs`),
so make sure we always get the same format, regardless of the
environment - including the decimal separator.
Fixes QubesOS/qubes-issues#3753
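A sketch of pinning the locale when invoking `lvs` (the wrapper name and
the particular flags are illustrative; LC_ALL=C is one way to get stable
formatting):

    import os
    import subprocess

    def run_lvs(*args):
        """Run `lvs` with a fixed locale so numeric fields always use
        '.' as the decimal separator, whatever the user's locale."""
        env = os.environ.copy()
        env['LC_ALL'] = 'C'
        return subprocess.check_output(
            ['lvs', '--noheadings', '--units', 'b', *args],
            env=env).decode()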
* devel-storage-fixes:
storage/file: use proper exception instead of assert
storage/file: import data into temporary volume
storage/lvm: check for LVM LV existence and type when creating ThinPool
storage/lvm: fix size reporting just after creating LV
If revisions_to_keep is 0, it may nevertheless have been > 0 before, so
it makes sense to call _remove_revisions() even in this case - and keep
none (not all) of the revisions.
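A hypothetical illustration of how a "keep none" case can silently turn
into "keep all" via Python's negative-slice pitfall (not necessarily the
actual bug; the helper is illustrative):

    def revisions_to_remove(revisions, revisions_to_keep):
        """Return the revisions to prune, oldest first. Note that
        revisions[:-0] would be [], i.e. prune nothing."""
        if revisions_to_keep == 0:
            return list(revisions)
        return revisions[:-revisions_to_keep]

    print(revisions_to_remove(['r1', 'r2', 'r3'], 0))  # prune all
    print(revisions_to_remove(['r1', 'r2', 'r3'], 2))  # ['r1']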
Even when a volume is not persistent (like TemplateBasedVM:root), it
should be resizable. The new size, similarly to the volume content, will
simply be reverted after the qube shuts down.
Additionally, when the VM is running, a volume resize should affect
_only_ its temporary snapshot. This way the resize can be properly
reverted together with the actual volume changes (which include the
resize2fs call).
Fixes QubesOS/qubes-issues#3519
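A minimal sketch of that dispatch, with the actual LVM resize passed in
as a callable (names are illustrative):

    def resize(is_running, resize_lv, vid, vid_snap, size):
        """While the VM runs, grow only the temporary -snap volume,
        so the new size is reverted together with the content."""
        if is_running:
            resize_lv(vid_snap, size)
        else:
            resize_lv(vid, size)

    resize(True, lambda v, s: print('lvextend', v, s),
           'vm-root', 'vm-root-snap', 2**31)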
* bug3164:
tests: add regression test for #3164
storage/lvm: make sure volume cache is refreshed after changes
storage/lvm: fix Volume.verify()
storage/lvm: remove old volume only after successfully cloning new one
In some cases it may happen that the new volume (`self._vid_snap`) does
not exist. This is normally an error, but even in such a case, do not
remove the only remaining instance of the volume (`self.vid`). Instead,
rename it temporarily and remove it only after the new volume is
successfully cloned.
Fixes QubesOS/qubes-issues#3164
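A sketch of the safer ordering, again with the LVM operations as
callables (all names illustrative):

    def clone_over(rename, remove, clone, vid, vid_snap):
        """Park the old volume under a temporary name and delete it
        only once the clone has succeeded, so the last copy of the
        data is never removed on failure."""
        tmp = vid + '-tmp'
        rename(vid, tmp)
        try:
            clone(vid_snap, vid)
        except BaseException:
            rename(tmp, vid)  # restore the only remaining copy
            raise
        remove(tmp)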
This applies to qvm-template-postprocess, which at the beginning tries
to resize the root volume to the appropriate size. It makes more sense
to silently succeed here, instead of forcing every client-side utility
to check whether the volume already has the desired size.
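A sketch of the no-op path (hypothetical helper; the real resize logic
is more involved):

    def resize(current_size, new_size, do_resize):
        if new_size == current_size:
            return  # silently succeed: already the desired size
        do_resize(new_size)

    resize(2**30, 2**30, lambda s: print('resizing to', s))  # no-op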
In certain locales (e.g. Danish), `usage_percent` will be printed with a
comma as the decimal separator, which makes `attr` point at the last two
decimal digits, so that `return vol_info['attr'][4] == 'a'` (in the
`verify` function) fails and `qubesd` won't run.
First, cache objects created with init_volume - this is the only place
where we have the full volume configuration (including the snap_on_start
and save_on_stop properties).
But also implement a get_volume method, to get a volume instance for a
given volume id. Such a volume instance may be incomplete (other
attributes are available only in the owning domain's configuration), but
it is enough for basic operations - like checking and changing its size,
cloning, etc.
Listing volumes still uses the list of physically present volumes.
This makes it possible to start the qubesd service without the physical
presence of some storage devices. Starting VMs that use such storage
would still fail, of course.
Fixes QubesOS/qubes-issues#2960
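A sketch of the cache-then-fallback pattern (the class and attribute
names are illustrative, and plain dicts stand in for volume objects):

    class PoolSketch:
        def __init__(self):
            self._volume_objects_cache = {}

        def init_volume(self, vid, volume_config):
            # the only place with the full config, including
            # snap_on_start and save_on_stop
            volume = dict(volume_config, vid=vid)
            self._volume_objects_cache[vid] = volume
            return volume

        def get_volume(self, vid):
            if vid in self._volume_objects_cache:
                return self._volume_objects_cache[vid]
            # incomplete instance: enough for size checks, cloning,
            # etc., but missing domain-side configuration
            return {'vid': vid}

    pool = PoolSketch()
    pool.init_volume('vm-private', {'save_on_stop': True})
    print(pool.get_volume('vm-private'))
    print(pool.get_volume('unknown-vid'))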