Apparently the kernel patch "x86/cpa: Use pte_attrs instead of pte_flags on
CPA/set_p.._wb/wc operations" (in our repo) doesn't fully solve the
problem, and sometimes the qubes-gui agent crashes with a message like
"qubes-gui:664 map pfn expected mapping type write-back for [mem
0x00093000-0x00093fff], got uncached-minus".
Because we really need PAT only in dom0 (its lack dramatically decreases
the performance of some graphics drivers), we can simply disable it in
VMs - as is currently done in the upstream kernel.
The backup_cancel() method kills processes registered by the main thread and
sets "running_backup_operation.canceled" to True. The main thread then gets an
error because of the killed processes and checks whether it was caused by the
cancel request.
Introduce BackupCanceledError, which can report a temporary dir to remove.
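A minimal sketch of such an exception, assuming it would derive from the
repo's base exception class (plain Exception is used here to keep the
snippet self-contained):

    class BackupCanceledError(Exception):
        """Raised when a running backup operation was canceled by the user."""
        def __init__(self, msg, tmpdir=None):
            super(BackupCanceledError, self).__init__(msg)
            # temporary directory the caller should remove, if any
            self.tmpdir = tmpdir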
Wait for the next reported filename only when restoring directly from
dom0. In the VM case it isn't necessary and would cause a false error report
(because the filename will be set to nextfile at the end of the restore
process, so it would be treated as a spurious file without a hmac).
Simply get the device major-minor numbers from the /dev/ device file.
This is only a partial solution, because it will work only for dom0
devices; the same problem can apply to VMs.
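A minimal sketch of the lookup (the device path is just an example):

    import os

    st = os.stat('/dev/sda')          # example device node
    major = os.major(st.st_rdev)      # device major number
    minor = os.minor(st.st_rdev)      # device minor number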
It stores basic backup information like the used hmac/crypto algorithm,
whether the backup is encrypted/compressed, and possibly more. The header
file is parsed only after successful HMAC verification. Because we do
not know which HMAC algorithm was used before reading the header, guess it
by trying all supported algorithms (starting with the default one).
The backup header is stored as the first file, always unencrypted
and uncompressed. Then qubes.xml follows.
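A minimal sketch of the guessing loop (verify_hmac's signature here is
illustrative):

    def detect_hmac_algorithm(verify_hmac, header, header_hmac, algorithms):
        # 'algorithms' is the list of supported algorithms, starting with
        # the default one
        for algo in algorithms:
            if verify_hmac(header, header_hmac, algo):
                return algo
        raise ValueError("cannot verify backup header HMAC")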
Call backup_restore_header from backup_restore_prepare; there is no
sense in requiring the user to call them separately. Also store all
parameters in the restore_info object as a special '$OPTIONS$' VM entry, to
avoid passing them twice (with all the chances for errors).
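A minimal sketch of the idea (the option keys shown are illustrative):

    restore_info = {
        # per-VM entries keyed by VM name go here...
        '$OPTIONS$': {                 # pseudo-VM entry holding parameters
            'verify-only': False,
            'use-default-template': True,
        },
    }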
This can be any application, for example Qubes Manager. Changing the current
dir can have side effects, especially when we do not change it back
after the restore (or when an error is encountered).
Save the next version to a temporary file, then move it over the destination
file. This way, when writing the file to disk fails (e.g. out of disk space),
the user still has the old file version intact.
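A minimal sketch of the write-then-rename pattern:

    import os
    import tempfile

    def atomic_write(path, data):
        # write to a temp file in the same directory, then rename it over
        # the destination; rename() is atomic on POSIX within a filesystem
        fd, tmp_path = tempfile.mkstemp(dir=os.path.dirname(path) or '.')
        try:
            with os.fdopen(fd, 'wb') as f:
                f.write(data)
                f.flush()
                os.fsync(f.fileno())
            os.rename(tmp_path, path)
        except BaseException:
            os.unlink(tmp_path)
            raise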
This is somewhat related to #757, but only the first (easier) step. The
actual change of the QubesAdminVm base class requires somewhat more changes;
for example, qvm-ls needs to know how to display this type of VM (none of
template, appvm, netvm).
Make this first-step change now, because starting with R2Beta3 dom0 will
be stored in qubes.xml (for new backup purposes), so this rename would
be complicated later.
A template is saved as a single archive of the whole VM directory. Preserve
the backup directory structure regardless of its content - in this case it
means we need a "." archive (with the template directory content) placed in
the "vm-templates/<template-name>/" backup directory. This allows the restore
process to select the right files to restore regardless of VM type.
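A minimal sketch of producing such an archive (the template name and paths
are examples):

    import subprocess

    # archive the directory *content* as ".", not the directory itself
    subprocess.check_call([
        'tar', '-cf', '/tmp/template.tar',
        '-C', '/var/lib/qubes/vm-templates/fedora-18-x64', '.',
    ])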
Also some major cleanups: reduce some more code duplication
(verify_hmac, simplify backup_restore_prepare). Rename the
backup_dir/backup_tmpdir variables to better match their purpose. Rename
backup_do_copy back to backup_do. Require a QubesVm object (instead of a VM
name) as the appvm param.
This way the backup process won't need more than 1GB for temporary files and
will also give more precise progress information. For now it looks like
the slowest element is qrexec, so without such a limit all the data would
be prepared (basically making a second copy of it in dom0) while only the
first few files were being transferred to the VM.
Also, backup progress is calculated based on the preparation thread, so when
it finishes, some additional time is needed to flush all the data to the
VM. Limiting this amount makes the progress somewhat more accurate (but still
off by 1GB...).
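A minimal sketch of the bounding idea: pass prepared chunks through a
size-limited queue so the preparation thread blocks once the limit is
reached (all names and numbers are illustrative):

    import queue
    import threading
    import time

    MAX_BUFFERED = 10     # with ~100MB chunks this caps temp usage near 1GB
    q = queue.Queue(maxsize=MAX_BUFFERED)

    def preparer():
        for i in range(100):
            q.put("chunk-%03d" % i)   # blocks while the queue is full
        q.put(None)                   # end-of-stream marker

    def sender():
        while True:
            chunk = q.get()
            if chunk is None:
                break
            time.sleep(0.01)          # stand-in for the slow qrexec transfer

    t = threading.Thread(target=preparer)
    t.start()
    sender()
    t.join()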
We can't wait for tar's next-volume prompt using stderr.readline(),
because tar doesn't output an EOL marker after this prompt. The other way
would be switching the file descriptor to non-blocking mode and using the
lower-level os.read(), but this looks like a more error-prone way (races...).
So change the idea of handling such archives: after switching to the next
archive volume, simply send '\n' to tar (which it will receive when
needed). When getting a "*.000" file, assume that the previous archive is
over and wait for the previous tar process. Then start the new one.
Also don't give an explicit tape length, only turn multi-volume mode on. This
way all multi-volume archives are handled correctly, regardless of their size.
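A minimal sketch of the volume handling described above (state tracking
simplified, names illustrative):

    import subprocess

    tar_process = None

    def handle_incoming_file(path):
        global tar_process
        if path.endswith('.000'):
            # a new archive begins - the previous one must be over
            if tar_process is not None:
                tar_process.stdin.close()
                tar_process.wait()
            # -M: multi-volume mode, no explicit tape length given
            tar_process = subprocess.Popen(
                ['tar', '-xM', '-f', path],
                stdin=subprocess.PIPE)
        else:
            # next volume of the current archive: send '\n', which tar
            # will read when it reaches its new-volume prompt
            tar_process.stdin.write(b'\n')
            tar_process.stdin.flush()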
This is mostly a revert of "3d1b40f backups: keep file without path in
inner tar archive" in terms of archive format, but the code is more
robust than the old one. In particular, reuse already computed dir paths. Also
restore only the requested files (based on the selected VMs and their qubes.xml
data). Change the restore workflow to restore files first to a temporary
directory, then move them to the final dirs (see the sketch after this list).
This approach:
- will be compatible with hashed VM names in the archive path
- is required to handle dom0 home backup (a directory outside of
/var/lib/qubes)
- should also be more defensive - make any changes in /var/lib/qubes
only after successful extraction of the files and creating the Qubes*Vm object
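A minimal sketch of the extract-then-move step (the extract callable and
paths are illustrative):

    import os
    import shutil
    import tempfile

    def restore_to_final_dir(extract, vm_subdir, final_dir):
        # 'extract' stands in for the tar extraction of one VM's files;
        # if it fails, /var/lib/qubes has not been touched at all
        tmpdir = tempfile.mkdtemp(prefix='restore-')
        extract(tmpdir)
        shutil.move(os.path.join(tmpdir, vm_subdir), final_dir)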
The second change in this commit is the implementation of dom0 home
backup/restore. As qubes.xml now contains data about dom0, we know whether
it is included in the backup before receiving the actual files.
Ensure that the outer tar/encryptor gets all the data *and EOF* before
signalling the inner tar to continue. Previously it could happen that the
inner tar began to write the next data chunk while qvm-backup still held the
previous data chunk open.
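A minimal sketch of the ordering fix ('outer' and 'inner_tar' stand for
subprocess.Popen instances):

    def pass_chunk(outer, inner_tar):
        outer.stdin.close()             # EOF: outer tar/encryptor got all data
        outer.wait()                    # ...and finished writing it out
        inner_tar.stdin.write(b'\n')    # only now let inner tar continue
        inner_tar.stdin.flush()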
It is senseless to have the full file path in multiple locations:
- external archive
- qubes.xml
- internal archive
Also, it is more logical to have only the "private.img" file in the archive,
placed in "appvms/untrusted/private.img.000". Although this is a rather
cosmetic change for VM data, it is required to back up an arbitrary
directory, like the dom0 user home.
Also use os.path.* instead of manual string operations (split,
partition). It is more foolproof.
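For example:

    import os

    path = 'appvms/untrusted/private.img.000'
    os.path.dirname(path)                  # 'appvms/untrusted'
    os.path.basename(path)                 # 'private.img.000'
    os.path.join('appvms', 'untrusted')    # 'appvms/untrusted'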
Any HVM (which isn't already template-based) can be a template for
another HVM. For now, do not allow simultaneous run of a template and its
VM (this assumption simplifies the implementation, as no root-cow.img is
needed).
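A minimal sketch of the mutual-exclusion check (attribute and method names
are illustrative):

    def verify_template_not_running(vm):
        # without root-cow.img the template's root image is used directly,
        # so a template and its VM must never run at the same time
        if vm.template is not None and vm.template.is_running():
            raise RuntimeError(
                "Cannot start %s: its template is running" % vm.name)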
Skip files already processed in the backup prepare phase. This is only needed
because qfile-dom0-unpacker doesn't support selective unpack (like tar does).
This should be extended to also skip VMs not selected for restore.
This was already partially implemented, but only for the backup header
(qubes.xml).
Fix handling of the vmproc object (available only when the backup is in
another VM).
Also fix some race conditions - wait for process termination, not only
check its exit code (which would be None if the process was still running).
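For example, with subprocess:

    import subprocess

    p = subprocess.Popen(['sleep', '1'])
    print(p.poll())   # None - still running; checking only this is a race
    print(p.wait())   # blocks until termination, returns the real exit code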
The GUI daemon isn't aware of multihead parameters, and the GUI protocol
doesn't carry such information either - currently, by design, it is
configured via a Qubes RPC service.
At GUI startup, send the monitor layout to the VM.
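A minimal sketch of such a call, assuming the qubes.SetMonitorLayout service
and one "width height x y" line per monitor (treat the exact invocation as
illustrative):

    import subprocess

    def send_monitor_layout(vm_name, monitors):
        # 'monitors' is a list of (width, height, x, y) tuples
        layout = ''.join('%d %d %d %d\n' % m for m in monitors)
        p = subprocess.Popen(
            ['qrexec-client', '-d', vm_name,
             'user:QUBESRPC qubes.SetMonitorLayout dom0'],
            stdin=subprocess.PIPE)
        p.communicate(layout.encode())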