
Fix iptables-restore race condition in vif-route-qubes

In rare cases, when vif-route-qubes is called simultaneously with some
other iptables-restore instance, it fails because of the missing --wait
option (recent iptables-restore defaults to aborting instead of waiting
for the lock). That other call may come from qubes-firewall or a user
script.

Related to QubesOS/qubes-issues#3665
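
For illustration, a minimal sketch of the race this fixes (not part of
the commit; the empty *raw payload is just a placeholder). Two
concurrent restores contend for the kernel's xtables lock; without
--wait the loser aborts, while with --wait it blocks until the lock is
free:

    # Hypothetical repro: run two restores at the same time.
    printf '*raw\nCOMMIT\n' | iptables-restore --noflush &
    printf '*raw\nCOMMIT\n' | iptables-restore --noflush   # may abort: lock held
    wait
    # With --wait (on versions that support it) the call waits instead:
    printf '*raw\nCOMMIT\n' | iptables-restore --noflush --wait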
Marek Marczykowski-Górecki, 5 years ago
commit 336754426b
1 file changed, 8 insertions and 3 deletions

network/vif-route-qubes: +8 −3

@@ -25,7 +25,12 @@ dir=$(dirname "$0")
 . "$dir/vif-common.sh"
 
 #main_ip=$(dom0_ip)
-lockfile=/var/run/xen-hotplug/vif-lock
+
+ipt_arg=
+if "iptables-restore" --help 2>&1 | grep -q wait=; then
+    # 'wait' must be last on command line if secs not specified
+    ipt_arg=--wait
+fi
 
 # shellcheck disable=SC2154
 if [ "${ip}" ]; then
@@ -101,12 +106,12 @@ if [ "${ip}" ] ; then
             ipt=iptables-restore
         fi
         echo -e "*raw\n$iptables_cmd -i ${vif} ! -s ${addr} -j DROP\nCOMMIT" | \
-            ${cmdprefix} flock $lockfile $ipt --noflush
+            ${cmdprefix} $ipt --noflush $ipt_arg
 	done
     # if no IPv6 is assigned, block all IPv6 traffic on that interface
     if ! [[ "$ip" = *:* ]]; then
         echo -e "*raw\n$iptables_cmd -i ${vif} -j DROP\nCOMMIT" | \
-            ${cmdprefix} flock $lockfile ip6tables-restore --noflush
+            ${cmdprefix} ip6tables-restore --noflush $ipt_arg
     fi
     ${cmdprefix} ip addr "${ipcmd}" "${back_ip}/32" dev "${vif}"
     if [ "${back_ip6}" ] && [[ "${back_ip6}" != "fe80:"* ]]; then