[Whonix-devel] RFC 6528 revision for better system privacy

bancfc at openmailbox.org bancfc at openmailbox.org
Fri Jan 13 18:41:16 CET 2017

On 2017-01-13 04:21, Steven M. Bellovin wrote:
> On 12 Jan 2017, at 20:49, bancfc at openmailbox.org wrote:
>> Hi Steven and Fernando,
>> I am a Whonix (anonymity OS) dev and would like to discuss the RFC 
>> 6528 [0] you worked on. There has been privacy research in the area of 
>> timer and clock leaks in network protocols that can aid adversaries in 
>> deanonymizing Tor clients and hidden services. There is a practical 
>> attack where an adversary can skew timer measurements by overloading 
>> target machines and affect the oscillation of timer crystals in 
>> predictable patterns that can be remotely measured in TCP sequence 
>> numbers.[1]
>> Please consider revising the RFC to omit the requirement of xoring 
>> timer output with TCP ISNs. Recently the Linux kernel gained the 
>> SipHash PRF to generate better sequence numbers and deprecated MD5. 
>> This further reduces the necessity of including timer input which has 
>> become a side channel that aids traffic correlation and endangers 
>> privacy focused use cases.

Thanks for writing back.

> I'm a bit confused -- there's no requirement in 6528 for XORing a
> timer.  That plus sign in the equation at the start of Section 3
> signifies addition (modulo 2^32, I should mention, since it's a 32-bit
> field), not exclusive-OR.  I'd have to think about it, but I'm not at
> all convinced that XOR would even work in that context.  The result of
> the actually specified operation is that the ISN of a given connection
> is uniformly randomly distributed in [0,2^32-1] and independent of the
> ISN of any other connection.

I'm not sure -- it could be a Linux implementation detail that deviates 
from the RFC, or an older behavior that was changed in newer versions of 
the kernel (that paper is from 2005, after all). The attack is in paper 
[1]. I've linked paper [2], which they referenced in passing, as the way 
they extract timer info.
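
For reference, the Section 3 construction can be sketched roughly like 
this (my own illustration -- the PRF choice, key, and tuple values are 
made up; MD5 truncated to 32 bits is just one of the PRFs the RFC 
mentions):

```python
# Sketch of the RFC 6528 Section 3 ISN scheme:
#   ISN = M + F(localip, localport, remoteip, remoteport, secretkey)
# computed modulo 2^32 (addition, not XOR). The key and tuple values
# here are illustrative placeholders.
import hashlib
import time

SECRET_KEY = b"example-boot-time-secret"  # hypothetical per-boot secret

def prf_offset(localip, localport, remoteip, remoteport, key=SECRET_KEY):
    """F(): a keyed 32-bit offset, constant for a given 4-tuple."""
    data = f"{localip}|{localport}|{remoteip}|{remoteport}".encode() + key
    return int.from_bytes(hashlib.md5(data).digest()[:4], "big")

def isn(localip, localport, remoteip, remoteport, now=None):
    """RFC 6528 ISN: a 4-microsecond timer plus the per-tuple offset."""
    if now is None:
        now = time.monotonic()
    m = int(now * 1_000_000) // 4       # M: ticks every 4 microseconds
    f = prf_offset(localip, localport, remoteip, remoteport)
    return (m + f) % 2**32              # addition mod 2^32, not XOR
```

Because F is fixed for a given 4-tuple, the timer term M is the only 
part of the ISN that changes between two connections on the same tuple.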

> I'm still reading [2], but I don't see the relevance of the attack to
> Tor. Tor does not provide end-to-end TCP; rather, it provides
> end-to-end transport of the TCP payload, correct?  In that case, the
> sequence numbers of the sender are only visible as far as its Tor
> entrance node, but they're useless for an attack -- anyone who
> observes those sequence numbers already has the source IP address;
> there's no added value, unless the threat is some machine whose source
> IP address is changing rapidly.  But if that's your threat model, you
> can look at the TCP PAWS timer.  Beyond that, though [2] notes that
> subtracting two ISN will give you the timer difference, that's only
> true if you subtract two ISNs for the same connection -- and that's
> very hard in this context.  The source port will change constantly,
> with each new TCP open, and while you can manage the same connection
> when calling a server, with a Tor hidden service the server is
> actually a client at the TCP level, so it picks its own port numbers.

You are correct, and this is brought up in the Tor developer discussion: 
https://trac.torproject.org/projects/tor/ticket/16659. However, I am 
also including in the threat model a passive global adversary that 
collects connections' port numbers and IPs and monitors ISN changes 
caused by collaborating active attackers connecting to a Hidden Service. 
Paper [1] doesn't mention this in its threat model, but it is very 
relevant today. I also wonder whether the underlying TCP layer (layer 4) 
would leak this info regardless of how well it is mitigated by the Tor 
protocol at layer 7, so that attacker-induced CPU stress would become 
apparent in any non-Tor TCP connections from the same machine -- say, 
clearnet browsing or apt updates. (There is a lot of good discussion in 
that bug ticket; I am interested in your feedback on the points brought 
up there.)
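
To make the ISN-subtraction point above concrete: the keyed offset 
cancels only when both ISNs come from the same 4-tuple, so the timer 
delta is exposed only in that case. A toy sketch (the PRF, key, and 
addresses are stand-ins of my own, not any real implementation):

```python
# Demonstrates why subtracting two ISNs recovers the timer delta only
# when both come from the SAME 4-tuple: the keyed per-tuple offset
# cancels. PRF, key, and addresses below are illustrative stand-ins.
import hashlib

KEY = b"illustrative-secret"
MOD = 2**32

def offset(tuple4):
    data = repr(tuple4).encode() + KEY
    return int.from_bytes(hashlib.md5(data).digest()[:4], "big")

def isn(tuple4, timer_ticks):
    return (timer_ticks + offset(tuple4)) % MOD

same = ("1.2.3.4", 40000, "5.6.7.8", 443)
other = ("1.2.3.4", 40001, "5.6.7.8", 443)   # new source port per connect

d_same = (isn(same, 7500) - isn(same, 5000)) % MOD    # 2500: delta exposed
d_other = (isn(other, 7500) - isn(same, 5000)) % MOD  # offsets differ: masked
```

With each new TCP open picking a fresh source port, an observer almost 
never gets two ISNs for the same tuple, which is the difficulty you 
describe.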

One of the lead devs commented: "adding the 64ns timer post-hash 
probably *does* leak side channels about CPU activity, and that may 
prove very dangerous for long-running cryptographic operations (along 
the lines of the hot-or-not issue)"

Another concern was that the actual clock time can be reconstructed 
because of tuple replays in one in every 256 connections to a guard 
relay: 

> I'm unfamiliar with what Linux has done to get away from RFC
> 1948/6528.  Simply replacing MD5 with SipHash does not do away with
> the need for the timer.  The whole purpose of the timer is to preserve
> TCP connection integrity semantics; omitting it changes those
> semantics.  Preserving them was the whole point of 1948, or I'd have
> suggesting simply using a good PRNG for ISNs in my 1989 paper.  Do you
> have a pointer to any documentation describing what's done?

Correct again. I confirmed that changing the hashing algorithm doesn't 
fix this by itself; the timer component is still added after the hash.

The SipHash patches accepted for 4.11:



SipHash homepage:


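For context, here is my reading of the shape of the 4.11 code (a rough 
sketch, not the kernel's actual implementation: the kernel uses real 
SipHash over the 4-tuple with a boot-time key, which I stand in for here 
with hashlib.blake2s and a placeholder key). The relevant point is that 
a 64 ns granularity clock is still added after the keyed hash:

```python
# Rough sketch of post-4.11 Linux ISN generation as I understand the
# patches: a keyed PRF over the 4-tuple (SipHash in the kernel;
# hashlib.blake2s with key= stands in here), plus a 64 ns granularity
# clock added post-hash. The key and addresses are placeholders.
import hashlib
import time

BOOT_SECRET = b"0123456789abcdef"  # placeholder for the per-boot key

def keyed_prf32(saddr, daddr, sport, dport, key=BOOT_SECRET):
    data = f"{saddr}|{daddr}|{sport}|{dport}".encode()
    return int.from_bytes(
        hashlib.blake2s(data, key=key, digest_size=4).digest(), "big")

def secure_tcp_seq(saddr, daddr, sport, dport, now_ns=None):
    if now_ns is None:
        now_ns = time.time_ns()
    timer = now_ns >> 6                 # 64 ns ticks, added post-hash
    return (keyed_prf32(saddr, daddr, sport, dport) + timer) % 2**32
```

Note that the timer term survives the algorithm swap -- that is the 
"64ns timer post-hash" the Tor ticket comment above refers to.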
> Anyway -- at the moment, I don't see an attack.  As best I can tell,
> [2] describes a way to leak data via the ISN without risk of detection
> (and I'm not even convinced that the threat of detection is real for a
> Tor hidden service, given the client port number issue).  That's not
> the same as being able to fingerprint a timer in a real Tor situation.
>  Even if it was, Tor does not relay TCP headers, so they're not
> visible past the first Tor hop.
> Mind you, I'm not saying you're wrong.  I am saying that you haven't
> persuaded me that you're right or that your suggestion preserves TCP
> semantics for non-Tor situations.
>> [0] https://tools.ietf.org/html/rfc6528
>> [1] http://sec.cs.ucl.ac.uk/users/smurdoch/papers/ccs06hotornot.pdf
>> [2] http://sec.cs.ucl.ac.uk/users/smurdoch/papers/ih05coverttcp.pdf
>         --Steve Bellovin, https://www.cs.columbia.edu/~smb

Thanks again for your time Steve.
