The source of the problem is that clockevents_program_event() returns -ETIME:
------------ kernel/time/clockevents.c:
/**
* clockevents_program_event - Reprogram the clock event device.
* @expires: absolute expiry time (monotonic clock)
*
* Returns 0 on success, -ETIME when the event is in the past.
*/
int clockevents_program_event(struct clock_event_device *dev,
                              ktime_t expires, ktime_t now)
-------------
xen_vcpuop_set_next_event schedules the event by reading the current time
(xen_clocksource_read()) (*1), adding the delta (expires - now), and
programming the event with the VCPUOP_set_singleshot_timer hypercall. Xen
then reads the current time itself (*2), and in some rare cases that time
is already past the expected timer expiration... Even right after the
VCPUOP_set_singleshot_timer hypercall, xen_clocksource_read() reports a
time slightly in the past compared to Xen's own time (reported by the
NOW() macro).
I think this happens because the "current" time is calculated in
different ways at *1 and *2. The *1 path is controlled by tsc_mode, which
is described here:
http://lxr.xensource.com/lxr/source/docs/misc/tscmode.txt. The default
tsc_mode=0 is "smart", and I suspect that is why it can lag slightly
behind NOW(). tsc_mode=2 works almost the same way as the NOW() macro.
Notably, tsc_mode=2 was the default in xen-3.4.
It is built upon qrexec2 and the qubes.VMShell command. So, in order to
e.g. start firefox in a fresh DispVM, do
qvm-run '$dispvm' firefox http://www.qubes-os.org
This is especially useful for proxy VMs that run some transparent proxy
service, such as tor, and need to rebind it upon an IP change (of course
this assumes iptables-based transparent redirection, e.g. via DNAT).
Apparently the vif frontend has a broken sg (scatter-gather)
implementation; we already worked around it in the init.d script via
ethtool, so now do the same in setup_ip. This is relevant when attaching
a firewallvm to a different netvm on the fly.
This reverts commit 94c0f6c9d3.
KPackageKit is not as well-behaved as gpk-update-viewer is:
e.g. it complains that there is no network connectivity and, perhaps
as a result, doesn't display the list of available updates.
qubes.py now places rules for each domain in a separate key under
/local/domain/fw_XID/qubes_iptables_domainrules/
plus the header in /local/domain/fw_XID/qubes_iptables_header.
/local/domain/fw_XID/qubes_iptables is now just a trigger.
So, if iptables-restore fails due to e.g. an error resolving a domain
name in the rules for one domain, then only that domain will not get
connectivity; the others will work fine.
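For illustration, a minimal sketch of this layout using the xenstore C
API (qubes.py itself uses the Python bindings; the helper name and the
trigger payload here are made up):
------------ (sketch, libxenstore):
#include <stdio.h>
#include <string.h>
#include <xs.h>

/* Illustrative helper only, not the actual qubes.py code. */
static void write_fw_rules(struct xs_handle *xs, int fw_xid,
                           const char *domain, const char *rules)
{
    char path[256];

    /* per-domain rules: a bad rule set only affects this one domain */
    snprintf(path, sizeof(path),
             "/local/domain/%d/qubes_iptables_domainrules/%s",
             fw_xid, domain);
    xs_write(xs, XBT_NULL, path, rules, strlen(rules));

    /* the trigger key: any write here tells the fw script to reload */
    snprintf(path, sizeof(path), "/local/domain/%d/qubes_iptables", fw_xid);
    xs_write(xs, XBT_NULL, path, "reload", strlen("reload"));
}
-------------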