Fifth part, conclusions, random gotchas and todos

My setup can certainly be improved if I can find some spare time to spend on it, but at the moment I am overloaded with work and will use it as it is. Naturally, the complexity and the elevated costs can be a showstopper for an average user, but for anyone above average, or with problems similar to mine, this is a simple and cheap way to solve them.

The proxy setup has a conceptual flaw that affects connection latency. The Squids don’t keep connections between themselves open outside of the client’s scope, which means that every time your browser opens a connection, three connections are actually opened. I couldn’t find any way to keep the inter-proxy connections open and pipeline the browsers’ requests over one permanent session. In theory this could be solved by setting up some TCP wrapper, like xinetd, on the local node that forwards connections to the remote Squid over MPTCP, but the part I’m missing is authorization. Without a static IP you have to resort to username/password authentication on the parent, and I’d prefer not to configure a proxy on every client I have on the LAN. It could also be solved with cumbersome workarounds like updating the parent Squid or iptables on every IP change, à la dynamic DNS, but again, it is cumbersome and I am content with what I have at the moment.
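
For what it’s worth, the forwarding half of that idea is simple enough; here is a minimal sketch, assuming xinetd on the local node, the standard Squid port 3128 and a placeholder remote address (with an MPTCP-enabled kernel the redirected connection would pick up multipath on its own, and the authorization problem described above remains unsolved):

# /etc/xinetd.d/squid-forward -- hypothetical; <remote-squid-ip> is a placeholder
service squid-forward
{
    type        = UNLISTED
    port        = 3128
    socket_type = stream
    protocol    = tcp
    wait        = no
    user        = nobody
    # xinetd's built-in TCP redirect: hand every accepted
    # connection over to the remote squid
    redirect    = <remote-squid-ip> 3128
}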

On the other hand, there are ways to reduce this latency in the newer kernels, like those proposed by Google more than a year ago, which were mostly implemented and accepted in newer kernel versions, and I’ll be eager to try MPTCP coupled with TCP Fast Open some day. For the time being I can only use the initcwnd/initrwnd optimization on the 3.5.7 kernel:

# local node:
ip route change default initcwnd 10 initrwnd 10 \
    nexthop via 192.168.230.1 dev eth0 weight 1 \
    nexthop via 192.168.232.1 dev eth0 weight 1
# remote node:
ip route change default via x.x.x.1 dev eth0 initcwnd 10 initrwnd 10

which I can’t really say gives any noticeable advantage in page loading.

When it comes to bonding, I have a todo related to link state detection. Bonding is designed to operate at layer 2, so it has no way to detect a problem further along the route, and openvpn doesn’t report the loss of connectivity with the other side either. As a result, the bond driver keeps using a non-functional interface forever, which in practice means a slowed-down connection until you either manually remove the failed interface from the bond or the link recovers. I was going to write a proper watchdog at some point; for now I did it with just a script hooked to openvpn events. MPTCP, in turn, senses failures well enough to tear down non-working streams and re-add them when needed. I can’t say how long it takes, but in practice you can’t even notice it; in a browser, when not using HTTPS, it comes across as a seamless experience.
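
For the curious, a minimal sketch of such a hook, assuming the bond is named bond0 and the bonding sysfs interface is available (the script would be wired in with script-security 2 and the up/down directives in the openvpn client config):

#!/bin/sh
# hypothetical openvpn up/down hook; openvpn exports the tunnel device
# name in $dev and the reason it was called in $script_type
case "$script_type" in
    # on (re)connect, enslave the tunnel device to the bond again
    up)   echo "+$dev" > /sys/class/net/bond0/bonding/slaves ;;
    # on disconnect, pull the dead slave out so the bond stops using it
    down) echo "-$dev" > /sys/class/net/bond0/bonding/slaves ;;
esac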

There is a small gotcha for MPTCP usage in my setup. By default, MPTCP will use all visible IP addresses on a node to try and reach another MPTCP-capable node, which means you have to disable it on local links and anything else that is not internet connectivity. A modified ip tool from the MPTCP project can be used to do just that:

ip link set dev eth0 multipath off

However, it disables MPTCP on the interface, not on a route, so if you resort to alias IPs as I do now, it is useless. I am already waiting for a mini-ITX VIA board with multiple NICs to arrive (after trying the cheap way), but the temporary solution is to use iptables:

iptables -I OUTPUT -s <non-internet local IP> -d <MPTCP-capable remote> -j REJECT

to stop streams from being created over non-internet links.

Also, depending on the specifics, rules like these may be helpful:

ip rule add from 192.168.0.0/16 to 192.168.0.0/16 lookup main priority 10
ip rule add from 192.168.0.0/16 to 10.0.0.0/8 lookup main priority 11

to keep the local node from sending local traffic to the remote node.

Oh, and probably the biggest todo can only be vaguely defined as “try it all with IPv6”. I have IPv6 deployed on the LAN through a Hurricane Electric (HE) tunnel right now, and I need to try to get it all together somehow on a rainy day.

Another annoying problem is that using a “server IP” may create some interesting effects when browsing the internets. First, some sites block “server IPs”, like Whirlpool Forums and some others I can’t remember. Also, if your remote is in another country, it can result in a weird experience with websites that use IP-based geolocation to deliver content. Like Google. I started my experiments using a server in the French OVH network, with an IP that was registered as Italian in RIPE. While browsing through it, various sites offered me Italian, French and Polish content, all mixed on the same page depending on who the content provider was. These side effects forced me to move my remote to an Italian IP in a smaller data center.

The move turned up another interesting detail. The Italian server sits behind an old Netscreen firewall configured in bridge mode, which does some kind of DPI on the traffic. When I pull my 16Mbps through it with MPTCP, the CPU load on the firewall reaches 90%. I guess this should be called “options confusion”, but hey, at least it lets the traffic pass, and the firewall’s owner hasn’t kicked me out yet.

A piece of advice for bittorrent users: your best bet is to install your torrent client, transmission-daemon or whatever it is, right on the local node and make it use the normal multipath routing rather than the bonded link or the proxy. This way you can saturate both lines without affecting your server traffic, which may be capped.
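
One way to do that, sketched under assumptions (the debian-transmission daemon user, the mark value and the table number are mine; the gateways are the ones from the route spec earlier), is policy routing by the owner of the traffic:

# mark packets generated by the transmission-daemon user
iptables -t mangle -A OUTPUT -m owner --uid-owner debian-transmission -j MARK --set-mark 0x2
# send marked traffic through a plain multipath default route, bypassing the bond
ip rule add fwmark 0x2 lookup 100 priority 20
ip route add table 100 default \
    nexthop via 192.168.230.1 dev eth0 weight 1 \
    nexthop via 192.168.232.1 dev eth0 weight 1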

Another gotcha is related to UDP-based VPNs, or rather to NATing UDP in general. Most probably your CPEs will be affected by a “sticky” connection tracking problem. When a router tracks a UDP connection in its NAT table, it sets an expiration timer on the entry; part of the record is the external IP used to rewrite the source IP of the packets. When the link drops and the external IP changes, the entry remains in the table with the old external IP. The NATed client keeps sending UDP packets with the same source and destination IPs/ports, so the entry never expires as long as the packets arrive at intervals below the UDP connection tracking timeout (30 seconds by default on Linux), yet they can never be sent out with the new external IP. What seems to work for me is randomizing the local port of the openvpn client, so that a reconnect creates a fresh conntrack entry:

lport 0
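
If the CPE is Linux-based and has the conntrack tool available, the stale entry can also be inspected and flushed by hand; a hedged sketch, assuming openvpn on its default port 1194:

# list the UDP conntrack entries towards the VPN endpoint
conntrack -L -p udp --dport 1194
# delete the stale entry so the next packet is NATed afresh with the new external IP
conntrack -D -p udp --dport 1194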

Well, that’s all for now, I suppose. Been a pleasure.
