The main reason is to remove code duplication.
Also fixes #260 and works around (by a sleep...) some race at NetVM restart
(the frontend driver does not notice vif-detach+vif-attach).
The source of the problem was clockevents_program_event returning -ETIME:
------------ kernel/time/clockevents.c:
/**
 * clockevents_program_event - Reprogram the clock event device.
 * @expires: absolute expiry time (monotonic clock)
 *
 * Returns 0 on success, -ETIME when the event is in the past.
 */
int clockevents_program_event(struct clock_event_device *dev,
                              ktime_t expires, ktime_t now)
-------------
xen_vcpuop_set_next_event schedules the event by getting the current time
(xen_clocksource_read()) (*1), adding the delta (expires-now), and
programming the event with the VCPUOP_set_singleshot_timer hypercall. Then
xen gets the current time (*2), and in some rare cases this time is already
after the expected timer expiration... Even right after the
VCPUOP_set_singleshot_timer hypercall, xen_clocksource_read() reports a
time slightly in the past compared to xen time (as reported by the NOW()
macro).
I think this is because the "current" time is calculated in a different way
in *1 and *2. The *1 way is controlled by tsc_mode, which is described
here: http://lxr.xensource.com/lxr/source/docs/misc/tscmode.txt. The
default tsc_mode=0 is "smart", and I think that because of this it can
report a time slightly before NOW(). tsc_mode=2 works almost the same way
as the NOW() macro. After all, tsc_mode=2 was the default in xen-3.4.
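To illustrate, a minimal Python model of the race; the skew and delta
values are made up for illustration, not measured:
-------------
# Model of the race: under tsc_mode=0 the guest clock (*1) can lag
# Xen's NOW() (*2); SKEW_NS and DELTA_NS are illustrative values only.
SKEW_NS = 2000                      # guest clock behind NOW(), in ns
DELTA_NS = 1000                     # requested expiry, 1us from "now"

xen_now = 1000000000                # NOW() in Xen, in ns
guest_now = xen_now - SKEW_NS       # xen_clocksource_read() (*1)

timeout_abs = guest_now + DELTA_NS  # passed to VCPUOP_set_singleshot_timer

if timeout_abs <= xen_now:          # Xen's check against NOW() (*2)
    print("-ETIME: expiry %d ns before NOW()" % (xen_now - timeout_abs))
-------------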
This reverts commit 94c0f6c9d3.
KPackageKit does not behave as nicely as gpk-update-viewer does,
e.g. it complains that there is no network connectivity and, perhaps
as a result, doesn't display the list of available updates.
qubes.py now places rules for each domain in a separate key under
/local/domain/fw_XID/qubes_iptables_domainrules/
plus the header in /local/domain/fw_XID/qubes_iptables_header.
/local/domain/fw_XID/qubes_iptables is now just a trigger.
So, if iptables-restore fails due to e.g. an error resolving a domain name
in the rules for one domain, then only this domain will not get
connectivity; the others will work fine.
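For illustration, a sketch of how the writer side could populate these
keys, assuming the xen.lowlevel.xs bindings; the XID, domain ID and rule
strings below are made up, the real logic lives in qubes.py:
-------------
import xen.lowlevel.xs

xs = xen.lowlevel.xs.xs()
base = "/local/domain/%d" % 42      # fw_XID = 42, example value

# one key per domain, so one broken rule set cannot break the others
xs.write('', base + "/qubes_iptables_domainrules/15",
         "-A FORWARD -s 10.137.2.15 -j ACCEPT\n")
xs.write('', base + "/qubes_iptables_header",
         "*filter\n:FORWARD DROP [0:0]\n")
# any write to the trigger key tells the firewall script to reload
xs.write('', base + "/qubes_iptables", "reload")
-------------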
There are indications that when the parent "xl" process exits, the domain
is not yet completely booted, and xl actions may interfere with qmemman
memory balancing. Thus, in VM.start(), we delay releasing the qmemman
handle until qrexec_daemon connects successfully.
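A sketch of the resulting ordering; request_memory() and wait_for_qrexec()
are placeholders for the real qubes.py helpers, passed in as arguments here
only to keep the example self-contained:
-------------
import subprocess

def start_vm(name, config_path, mem_mb, request_memory, wait_for_qrexec):
    qmemman_client = request_memory(mem_mb)   # lock out qmemman balancing
    try:
        subprocess.check_call(["xl", "create", config_path])
        wait_for_qrexec(name)                 # qrexec_daemon connected
    finally:
        qmemman_client.close()                # only now resume balancing
-------------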
In fact, set it to ALL_PHYS_MEM (and do the same for other domains that do
not have the static-max key, although there should not be any). The
previous method of using maxmem_kb was broken, as qmemman sets maxmem_kb to
the memory target (which I do not like, btw).
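A sketch of the intended lookup, assuming the xen.lowlevel.xs bindings;
the ALL_PHYS_MEM value and helper name are illustrative:
-------------
import xen.lowlevel.xs

ALL_PHYS_MEM = 16 * 1024 * 1024    # kB, illustrative value

def get_static_max(xid):
    # Prefer the static-max key; fall back to ALL_PHYS_MEM instead of
    # maxmem_kb, which qmemman keeps overwriting with the memory target.
    xs = xen.lowlevel.xs.xs()
    val = xs.read('', '/local/domain/%d/memory/static-max' % xid)
    if val is None:
        return ALL_PHYS_MEM
    return int(val)
-------------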