This tool is by design called as root, so try to:
- switch to a normal user if possible
- otherwise, fix file permissions afterwards
QubesOS/qubes-issues#2412
There was a comment '# Set later', but the values were actually never set.
This broke adding a just-installed template (qvm-template-postprocess).
QubesOS/qubes-issues#2412
When the system is installed with an LVM thin pool, it should be used by
default. But let's keep the file-based pool on /var/lib/qubes for some corner
cases, migration etc.
QubesOS/qubes-issues#2412
This tool is intended to be called to finish template installation/removal.
A template RPM package is basically a container for root.img, nothing more.
Other parts need to be generated after root.img extraction. Previously
this was open-coded in the rpm post-install script, but let's keep it as a qvm
tool to ease supporting multiple versions in the template builder.
QubesOS/qubes-issues#2412
VM files may already be removed. Don't fail on this while removing a
VM - it's probably the very reason why the domain is being removed.
The qvm-remove tool has its own guard for this, but it isn't enough - if
rmtree(dir_path) fails, storage.remove() would not be called, so
non-file storage would not be cleaned up.
This is also needed to correctly handle template reinstallation, where
the VM directory is moved away so that create_on_disk can be called again.
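A minimal sketch of the intended behavior (not the literal code from this
commit), assuming storage.remove() is called right after the directory cleanup:

    import errno
    import shutil

    def remove_vm_dir(dir_path):
        # Tolerate an already-removed VM directory, so the storage.remove()
        # call that follows is never skipped.
        try:
            shutil.rmtree(dir_path)
        except OSError as e:
            if e.errno != errno.ENOENT:
                raise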
QubesOS/qubes-issues#2412
'-' is not a valid character in a Python identifier, so all the properties
use '_'. But in previous versions the qvm-* tools accepted names with '-',
so let's not break this.
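A sketch of the translation a qvm-* tool can do (the helper name here is
illustrative, not the actual option-parsing code):

    def set_vm_property(vm, name, value):
        # Accept e.g. 'default-user' as well as 'default_user' on the command
        # line; the Python property itself always uses '_'.
        setattr(vm, name.replace('-', '_'), value)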
QubesOS/qubes-issues#2412
The /var/log/qubes directory has setgid set, so all the files will be owned
by the qubes group (that's ok), but nothing enforces creating them
group-writable, which undermines the group ownership (logs created by root
would not be writable by a normal user).
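One way to enforce this, sketched here as a umask adjustment around log file
creation (the actual fix may differ):

    import logging
    import os

    def open_group_writable_log(path):
        # Files created under /var/log/qubes should end up group-writable
        # (0o660, with the directory's setgid 'qubes' group), even for root.
        old_umask = os.umask(0o007)
        try:
            return logging.FileHandler(path)
        finally:
            os.umask(old_umask)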
QubesOS/qubes-issues#2412
In case of LVM (at least), the "internal" flag is initialized only when
listing volumes attached to a given VM, but not when listing them from the
pool. This looks like a limitation (bug?) of the pool driver. A much nicer
fix is to handle the flag in the qvm-block tool (which lists VMs'
volumes anyway) than in the LVM storage pool driver (which would need to
keep a second copy of the volume list - just like the file driver does).
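A sketch of the qvm-block side of this (attribute and collection names are
assumed for illustration):

    def iter_attachable_volumes(app):
        # List volumes per VM - the only place where the 'internal' flag is
        # reliably initialized - and hide the internal ones.
        for vm in app.domains:
            for volume in vm.volumes.values():
                if getattr(volume, 'internal', False):
                    continue
                yield vm, volume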
QubesOS/qubes-issues#2256
There are multiple cases where snapshots are created inconsistently, for
example:
- "-back" snapshot created from the "new" data, instead of the old one
- "-snap" created even when volume.snap_on_start=False
- probably more
Fix this by following volume.snap_on_start and volume.save_on_stop
directly, instead of going through the abstraction of the old volume types.
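A sketch of the intended decision logic, driven directly by the two flags
(assumed semantics, not the actual driver code):

    def snapshots_needed(volume):
        snaps = []
        if volume.snap_on_start:
            # run on a snapshot of the source volume, taken at VM start
            snaps.append('-snap')
        if volume.save_on_stop:
            # keep the previously committed data as the '-back' snapshot
            snaps.append('-back')
        return snaps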
QubesOS/qubes-issues#2256
Just calling pool.init_volume isn't enough - a lot of code depends on
additional data loaded into the vm.storage object. Provide a convenient
wrapper for this.
At the same time, fix loading extra volumes from qubes.xml - don't fail
on a volume not mentioned in the initial vm.volume_config.
QubesOS/qubes-issues#2256
- add missing lvm remove call when committing changes
- delay creating the volatile image until domain startup (it will be created
then anyway)
- reset the cache only when something has really changed
- attach the VM to the volume (snapshot) created for its runtime - to not
expose changes (for example in the root volume) to child VMs until
shutdown
QubesOS/qubes-issues#2412
QubesOS/qubes-issues#2256
The wrapper doesn't do anything other than translate command
parameters, but its load time is significant (mostly because of Python
imports). Since we can't use the Python lvm API as a non-root user anyway,
let's drop the wrapper and call `lvm` directly (or through sudo when
necessary).
This makes VM startup much faster - storage preparation is down from
over 10s to about 3s.
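A sketch of the direct invocation (command construction and sudo handling
here are illustrative):

    import os
    import subprocess

    def qubes_lvm(*args):
        # Call the lvm binary directly; go through sudo only when we are not
        # root already.
        cmd = ['lvm'] + list(args)
        if os.getuid() != 0:
            cmd = ['sudo'] + cmd
        return subprocess.check_output(cmd)

    # e.g. qubes_lvm('lvremove', '-f', volume_path)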
QubesOS/qubes-issues#2256
...instead of a manual copy in Python. dd is much faster, and when used
with `conv=sparse` it correctly preserves sparse images.
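For illustration, the copy then boils down to something like this (block size
chosen arbitrarily here):

    import subprocess

    def copy_image(src, dst):
        # dd preserves holes in the source when told to write sparsely
        subprocess.check_call([
            'dd', 'if=' + src, 'of=' + dst, 'bs=4M', 'conv=sparse',
        ])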
QubesOS/qubes-issues#2256
Set the parameters for possibly hiding the domain's real IP before attaching
the network to it, otherwise we'll have a race condition with the
vif-route-qubes script.
QubesOS/qubes-issues#1143
This is the IP known to the domain itself and to downstream domains. It may
be different from the one seen by its upstream domain.
Related to QubesOS/qubes-issues#1143
This helps hide the VM's IP for anonymous VMs (Whonix) even when some
application leaks it. The VM will know only some fake IP, which should be set
to something as common as possible.
The feature is mostly implemented on the (Proxy)VM side, using NAT in a
separate network namespace. The core here only passes arguments to it.
It is designed so that multiple VMs can use the same IP and still
not interfere with each other. Even more: it is possible to address
each of them (using their "native" IP), even when multiple of them share
the same "fake" IP.
The original approach (marmarek/old-qubes-core-admin#2) passed network script
arguments by appending them to the script name, but libxl in Xen >= 4.6
fixed that side effect and it isn't possible anymore. So use QubesDB
instead.
From the user's POV, this adds 3 "features" (see the sketch below):
- net/fake-ip - IP address visible in the VM
- net/fake-gateway - default gateway in the VM
- net/fake-netmask - network mask
The feature is enabled if net/fake-ip is set (to some IP address) and is
different from the VM's native IP. All of those "features" can be set on a
template, to affect all of its VMs.
Firewall rules etc. in the (Proxy)VM should still be applied to the VM's
"native" IP.
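For illustration, enabling the feature from dom0 could look like this (a
sketch using the core3 features API; the VM name and addresses are just
examples):

    import qubes

    app = qubes.Qubes()
    vm = app.domains['anon-whonix']
    vm.features['net/fake-ip'] = '192.168.0.10'
    vm.features['net/fake-gateway'] = '192.168.0.1'
    vm.features['net/fake-netmask'] = '255.255.255.0'
    app.save()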
Fixes QubesOS/qubes-issues#1143
Core3 keeps the information whether a property has its default value for all
properties (not only a few like netvm or kernel). Try to use this feature
as much as possible.
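For example, a sketch of using it when serializing VM properties (API names
as in core3; treat the helper itself as illustrative):

    def non_default_properties(vm):
        # Store only properties that were explicitly set; everything else can
        # be recomputed from defaults on load.
        return {
            prop.__name__: getattr(vm, prop.__name__)
            for prop in vm.property_list()
            if not vm.property_is_default(prop.__name__)
        }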
When the user includes/excludes some VMs for restoration, it may be
necessary to fix the dependencies between them (for example when the default
template is no longer going to be restored).
Also fix the handling of conflicting names.
Now that the file name is also integrity-protected (prefixed to the
passphrase), we can make sure that the input files are given in the same
order and are parts of the same VM.
QubesOS/qubes-issues#971
This prevents swapping parts of a backup of the same VM between different
backups made by the same user (or actually: with the same passphrase).
QubesOS/qubes-issues#971
The previously used `openssl dgst` and `openssl enc` handle key
stretching poorly - in case of `openssl enc` the encryption key is derived
using a single MD5 iteration, without even any salt. This hardly prevents
brute-force or even rainbow-table attacks. To make things worse, the
same key is used for encryption and integrity protection, which eases
brute force even further.
All this is still about brute-force attacks, so when using a long,
high-entropy passphrase, it should still be relatively safe. But let's do
better.
According to the discussion in QubesOS/qubes-issues#971, the scrypt algorithm
is a good choice for key stretching (it isn't the best of all existing ones,
but a good one and widely adopted). At the same time, let's switch away from
the `openssl` tool, as it is very limited and apparently not designed for
production use. Use the `scrypt` tool instead, which is very simple and does
exactly what we need - encrypt the data and integrity-protect it. Its archive
format has its own (simple) header with the data required by the `scrypt`
algorithm, including the salt. Internally the data is encrypted with AES256-CTR
and integrity-protected with HMAC-SHA256. For details see:
https://github.com/tarsnap/scrypt/blob/master/FORMAT
This means a change of the backup format. Mainly:
1. The HMAC is stored in the scrypt header, so don't use a separate file for
it. Instead keep the data in files with the `.enc` extension.
2. For compatibility, keep `backup-header` and `backup-header.hmac`. But
`backup-header.hmac` is really a scrypt-encrypted version of `backup-header`.
3. For each file, prepend its identifier to the passphrase, to authenticate
the filename itself too. Having this, we can guard against reordering
archive files within a single backup and across backups (see the sketch
below). This identifier is built as:
backup ID (from backup-header)!filename!
For backup-header itself, there is no backup ID (just 'backup-header!').
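A sketch of the passphrase construction described above (feeding it to the
`scrypt` tool is omitted here):

    def scrypt_passphrase(passphrase, backup_id, filename):
        # Bind every archive member to this particular backup and file name,
        # so files cannot be reordered or swapped between backups.
        if filename == 'backup-header':
            return 'backup-header!' + passphrase
        return '{}!{}!{}'.format(backup_id, filename, passphrase)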
Fixes QubesOS/qubes-issues#971
Have a generic function `handle_streams`, instead of
`wait_backup_feedback` with open-coded process names and manual
iteration over them.
No functional change, besides a minor logging change.
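A rough sketch of what such a generic helper can look like (the signature is
assumed, not the actual implementation):

    def handle_streams(processes):
        # Wait for every process in the pipeline (a name -> Popen mapping);
        # return the name of the first one that failed, or None on success.
        for name, proc in processes.items():
            if proc.wait() != 0:
                return name
        return None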
Use the just-introduced tar writer to archive the contents of LVM volumes (or
more generally: block devices). Place them as 'private.img' and
'root.img' files in the backup - just like in the old format. This requires
support for replacing the file name in the tar header - another thing trivially
supported by the tar writer.
tar can't write an archive with the _contents_ of a block device. We need this
to back up LVM-based disk images. To avoid dumping the image to a file first,
create a simple tar archiver just for this purpose.
Python is not the fastest possible technology - it's 3 times slower than an
equivalent written in C. But it's much easier to read, much less
error-prone, and still processes a 1GB image in under 1s (CPU time, leaving
aside actual disk reads). So, it's acceptable.
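A condensed sketch of the idea, here reusing the standard tarfile module just
to build the header (the actual tar writer is a standalone implementation):

    import os
    import tarfile

    def tar_block_device(device_path, member_name, out):
        # Write a tar header describing the device contents as a regular
        # file, then stream the raw blocks after it.
        fd = os.open(device_path, os.O_RDONLY)
        try:
            info = tarfile.TarInfo(name=member_name)
            info.size = os.lseek(fd, 0, os.SEEK_END)
            os.lseek(fd, 0, os.SEEK_SET)
            out.write(info.tobuf(format=tarfile.GNU_FORMAT))
            left = info.size
            while left > 0:
                chunk = os.read(fd, min(left, 4 << 20))
                if not chunk:
                    break
                out.write(chunk)
                left -= len(chunk)
            if info.size % 512:
                # tar pads each member to a 512-byte boundary
                out.write(b'\0' * (512 - info.size % 512))
        finally:
            os.close(fd)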
Old backup metadata (old qubes.xml) does not contain info about
individual volume sizes. So, extract it from the tar header (using verbose
output during restore) and resize the volume accordingly.
Without this, restoring volumes larger than the default would be impossible.
To ease all this, rework the restore workflow: first create the QubesVM objects
and all their files (as for a fresh VM), then override them with data
from the backup - possibly redirecting some files to a new location. This
allows generic code to create the LVM volumes and then only restore their
content.
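A sketch of recovering the size from tar's verbose listing (field layout as
printed by GNU tar; how the volume is then resized is up to the storage layer):

    def volume_size_from_tar_line(line):
        # Typical verbose line:
        # '-rw-r--r-- root/root 10737418240 2016-10-01 12:00 root.img'
        fields = line.split()
        return fields[-1], int(fields[2])

    # name, size = volume_size_from_tar_line(line)
    # ...then resize the matching volume to at least `size` before extraction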
1. Add a helper function on vm.storage. This is equivalent to:
vm.storage.get_pool(vm.volumes[name]).export(vm.volumes[name])
2. Make sure the path returned by `export` on an LVM volume is accessible.
First part - handling firewall.xml and rule formatting (sketched below).
Specification at https://qubes-os.org/doc/vm-interface/
TODO (for dom0):
- plug into QubesVM object
- expose rules in QubesDB (including reloading)
- drop old functions (vm.get_firewall_conf etc)
QubesOS/qubes-issues#1815
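For illustration, formatting a rule as the key=value pairs described in the
linked specification could look like this (the exact key names come from that
spec; the helper itself is hypothetical):

    def rule_to_qubesdb(rule_pairs):
        # rule_pairs is an ordered list of (key, value) tuples,
        # e.g. [('action', 'accept'), ('proto', 'tcp'), ('dstports', '443-443')]
        return ' '.join('{}={}'.format(k, v) for k, v in rule_pairs)

    # rule_to_qubesdb([('action', 'accept'), ('proto', 'tcp'),
    #                  ('dstports', '443-443')])
    # -> 'action=accept proto=tcp dstports=443-443'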