backup: reduce volume size to 100M and limit queue length

This way the backup process won't need more than 1GB for temporary
files and will also report progress more precisely. For now the
slowest element appears to be qrexec, so without such a limit all the
data would be prepared (essentially making a second copy of it in
dom0) while only the first few files had been transferred to the VM.
Also, backup progress is calculated from the preparation thread, so
when it finishes some additional time is still needed to flush the
remaining data to the VM. Limiting the amount of queued data makes
the progress somewhat more accurate (but still off by up to 1GB...).
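
Below is a minimal sketch of the bounded producer/consumer pattern this
relies on. The names (prepare_chunks, send_chunks, the paths) are
illustrative only, not the actual Send_Worker/qrexec code; it assumes a
multiprocessing-style worker like the one in the diff. The point is that
put() on a bounded Queue blocks the preparation side once 10 chunks are
pending.

    import os
    import shutil
    import tempfile
    from multiprocessing import Process, Queue

    VOLUME_SIZE = 100000 * 1024   # ~100 MB, matching tar --tape-length 100000 (units of 1024 bytes)
    QUEUE_MAXLEN = 10             # matches Queue(10) in the diff

    def prepare_chunks(src_paths, to_send, tmpdir):
        """Producer: split inputs into ~100 MB temp files and queue their paths.
        put() blocks once QUEUE_MAXLEN chunks are pending, so preparation cannot
        run arbitrarily far ahead of the (slower) transfer stage."""
        for path in src_paths:
            with open(path, "rb") as src:
                while True:
                    data = src.read(VOLUME_SIZE)
                    if not data:
                        break
                    fd, tmp = tempfile.mkstemp(dir=tmpdir)
                    with os.fdopen(fd, "wb") as out:
                        out.write(data)
                    to_send.put(tmp)      # blocks while the queue is full
        to_send.put(None)                 # sentinel: nothing more to send

    def send_chunks(to_send, dest_path):
        """Consumer: append each queued chunk to the destination, then delete it."""
        with open(dest_path, "ab") as dest:
            while True:
                tmp = to_send.get()
                if tmp is None:
                    break
                with open(tmp, "rb") as chunk:
                    shutil.copyfileobj(chunk, dest)
                os.unlink(tmp)            # temp space is reclaimed right away

    if __name__ == "__main__":
        to_send = Queue(QUEUE_MAXLEN)     # bounded, like to_send = Queue(10) below
        sender = Process(target=send_chunks, args=(to_send, "/tmp/backup.bin"))
        sender.start()
        prepare_chunks(["/etc/os-release"], to_send, tempfile.gettempdir())
        sender.join()

With an unbounded queue the producer would finish long before the
consumer, leaving a full second copy of the backup on disk in dom0,
which is exactly the situation described above.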
Marek Marczykowski-Górecki 2013-11-25 00:55:59 +01:00
parent 07ae02915f
commit e31c3ae8e7
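
As a rough sanity check on the 1GB figure, the two values changed in the
diff below combine as follows (not code from the repository; GNU tar's
--tape-length is given in units of 1024 bytes):

    # Worst-case footprint of prepared-but-unsent data with the new limits.
    VOLUME_BYTES = 100000 * 1024          # tar --tape-length 100000 -> ~100 MB per volume
    QUEUE_MAXSIZE = 10                    # to_send = Queue(10)
    print(QUEUE_MAXSIZE * VOLUME_BYTES)   # 1024000000 bytes, roughly 1 GB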


@@ -1123,7 +1123,7 @@ def backup_do_copy(base_backup_dir, files_to_backup, passphrase,\
progress = blocks_backedup / float(total_backup_sz)
progress_callback(int(round(progress*100,2)))
- to_send = Queue()
+ to_send = Queue(10)
send_proc = Send_Worker(to_send, backup_tmpdir, backup_stdout)
send_proc.start()
@@ -1147,7 +1147,7 @@ def backup_do_copy(base_backup_dir, files_to_backup, passphrase,\
# Prefix the path in archive with filename["subdir"] to have it verified during untar
tar_cmdline = ["tar", "-Pc", '--sparse',
"-f", backup_pipe,
- '--tape-length', str(1000000),
+ '--tape-length', str(100000),
'-C', os.path.dirname(filename["path"]),
'--xform', 's:^[a-z]:%s\\0:' % filename["subdir"],
os.path.basename(filename["path"])
@@ -1487,7 +1487,7 @@ def restore_vm_dirs (backup_dir, backup_tmpdir, passphrase, vms_dirs, vms,
def progress_callback(data):
pass
to_extract = Queue()
to_extract = Queue()
extract_proc = Extract_Worker(queue=to_extract,
base_dir=backup_tmpdir,
passphrase=passphrase,