Persistent Tor Entry Guards Introduction

From Whonix

What are Tor Entry Guards?

Current practical, low-latency, anonymity designs like Tor fail when the attacker can see both ends of the communication channel. For example, suppose the attacker controls or watches the Tor relay a user chooses to enter the network, and also controls or watches the website visited. In this case, the research community is unaware of any practical, low-latency design that can reliably prevent the attacker from correlating volume and timing information on both ends.

Mitigating this threat requires consideration of the Tor network topology. Suppose the attacker controls, or can observe, c relays from a pool of n total relays. If a user selects a new entry and exit relay each time the Tor network is used, the attacker can correlate all traffic sent with a probability of (c/n)². For most users, profiling is as hazardous as being traced all the time. Simply put, users want to repeat activities without the attacker noticing, but being noticed once by the attacker is as detrimental as being noticed more frequently. [...]

The solution to this problem is "entry guards". Each Tor client selects a few relays at random to use as entry points, and only uses those relays for the first hop. If those relays are not controlled or observed, the attacker can't use end-to-end techniques and the user is secure. If those relays are observed or controlled by the attacker, then they see a larger fraction of the user's traffic — but still the user is no more profiled than before. Thus, entry guards increase the user's chance of avoiding profiling (on the order of (n-c)/n), compared to the former case.
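To make the comparison concrete, here is a minimal sketch (toy numbers, not Tor's actual path-selection algorithm) of how the two strategies differ over many circuits:

```python
# Illustrative sketch only: compares the chance of end-to-end correlation
# with and without persistent entry guards. The pool size n and observed
# relay count c are assumed numbers for illustration.

def p_circuit_compromised(c: int, n: int) -> float:
    """Probability that one circuit has both an observed entry and an
    observed exit, when both hops are drawn uniformly from n relays."""
    return (c / n) ** 2

def p_ever_compromised_rotating(c: int, n: int, circuits: int) -> float:
    """With a fresh entry and exit per circuit, the chance that at least
    one of `circuits` circuits is correlated grows toward 1 over time."""
    return 1 - (1 - p_circuit_compromised(c, n)) ** circuits

def p_guard_user_safe(c: int, n: int) -> float:
    """With a persistent guard, a user whose guard is honest stays safe
    no matter how many circuits they build: probability (n - c) / n."""
    return (n - c) / n

n, c = 3000, 150  # assume 5% of relays are observed by the adversary
print(p_ever_compromised_rotating(c, n, 10_000))  # rotation: near-certain exposure
print(p_guard_user_safe(c, n))                    # guards: most users never exposed
```

The sketch captures the trade-off described above: rotating entries spreads a small per-circuit risk until it becomes near-certain, while a persistent guard concentrates the risk into a one-time (n-c)/n gamble.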

You can read more at An Analysis of the Degradation of Anonymous Protocols [archive], Defending Anonymous Communication Against Passive Logging Attacks [archive], and especially Locating Hidden Servers [archive].

Restricting entry nodes may also help to defend against attackers who want to run a few Tor nodes and easily enumerate all of the Tor user IP addresses. [1] However, this feature won't become really useful until Tor moves to a "directory guard" design as well.

Source and License, see footnote: [2]

Many well-known enhanced anonymity designs such as Tor, Whonix and the Tor Browser Bundle (TBB) use persistent Tor guards. This decision is attributable to community-based research demonstrating that persistent Tor entry guards benefit security and lower the probability of an adversary profiling a user.[3]

Note: Guard fingerprinting techniques are similar to methods that track users via MAC addresses. If this is a realistic threat, then MAC address randomization is also recommended.

In general, users should not interfere with Tor guard persistence or the natural rotation of entry guards every few months. At the time of writing, the Tor client selects a single guard node; earlier versions used a three-guard design. Guards have a primary lifetime of 120 days.[4] [5] [6]
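Guard behavior is governed by a handful of torrc options, named below purely for reference from the tor manual. The inline values shown are the defaults; as noted above, changing them is discouraged:

```text
# Default guard-related torrc settings (for reference only; do not change).
UseEntryGuards 1    # use persistent entry guards (the default)
NumEntryGuards 0    # 0 = let Tor choose the default number of guards
GuardLifetime 0     # 0 = use the network consensus rotation period
```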

Warning: In some situations it is safer not to use the usual guard relay!

Guard Fingerprinting

While natural guard rotation is recommended, there are some corner cases in which an adversary could fingerprint the entry guards [7] and de-anonymize a user. For instance:

  • The same entry guards are used across various physical locations and access points.
  • The same entry guards are used after permanently moving to a different physical location.

Consider the following scenario. A user connects to Tor via a laptop at their home address. An advanced adversary has observed the client-to-guard-node network path to discover the user's entry point to the Tor network.

Soon afterwards, the same user attends a prominent event or protest in a nearby city. At that location, the user decides to anonymously blog about what transpired using the same laptop. This is problematic for anonymity, as the Tor client is using the same entry guard normally correlated with the user’s home address.

To understand the potential threat, consider the following:

  • There are only around 3,000 Tor guards in 2019. [8]
  • By design, Tor picks one primary guard and uses it for a few months. Since the Tor user base is relatively small, a guard might be used by only one person in an entire region.
  • The IP addresses of Tor entry guards are static and publicly listed in the Tor consensus, and Tor network traffic is easily distinguishable, so which guard a client uses is effectively public knowledge.
  • If a user-guard relationship is unique in one city and the same guard is later contacted from elsewhere, it is likely (but not certain) that the same user has changed location.
  • At the event, the user might be the only one using Tor (or among a handful).
  • If the user posts about the event while an adversary passively monitoring network traffic again observes the client-to-guard path, the "anonymous" posts will likely be linked to the same person who normally connects to that guard from home.

The relative uncommonness of Tor usage simply exacerbates the potential for de-anonymization.
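The entropy figures in the footnotes can be sanity-checked with a short calculation: learning which k-guard set a client uses reveals log2(C(n, k)) bits of identifying information. The sketch below (pool sizes are assumptions for illustration) suggests the footnoted 9/17/25-bit values correspond roughly to a pool of about 500 guards, while the ~3,000 guards cited for 2019 put a single guard at about 11.5 bits:

```python
# Sketch: bits of identifying information leaked by a known guard set.
# A set of k guards out of n is one of C(n, k) possibilities, so observing
# it yields log2(C(n, k)) bits. Pool sizes below are illustrative assumptions.
from math import comb, log2

def guard_entropy_bits(n_guards: int, k: int) -> float:
    """Bits revealed by learning which k-guard set a client uses."""
    return log2(comb(n_guards, k))

# With the ~3,000 guards cited for 2019, a single guard narrows a client
# down by about 11.5 bits.
print(round(guard_entropy_bits(3000, 1), 1))

# The 9/17/25-bit footnote figures roughly match a smaller pool of ~500:
for k in (1, 2, 3):
    print(k, round(guard_entropy_bits(500, k), 1))
```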

There are several ways to mitigate the risk of guard fingerprinting across different physical locations; in most cases, the original entry guards can be re-established after returning home.

Forum discussion: [archive]

  1. Even though the attacker can't discover the user's destinations in the network, they still might target a list of known Tor users.
  2. Source: What are Entry Guards? [archive] (w [archive])
    license [archive] (w [archive]):
    Content on this site is Copyright The Tor Project, Inc. Reproduction of content is permitted under a Creative Commons Attribution 3.0 United States License [archive] (w [archive]). All use under such license must be accompanied by a clear and prominent attribution that identifies The Tor Project, Inc. as the owner and originator of such content. The Tor Project Inc. reserves the right to change licenses and permissions at any time in its sole discretion.
  3. The risk of guard fingerprinting is less severe now that upstream (The Tor Project) has changed its guard parameters to decrease the de-anonymization risk.
  4. Prop 291 indicates a 3.5 month guard rotation.
  5. The Tor Project is currently considering shifting to two guards per client for better anonymity, instead of having one primary guard in use.
  6. [archive]
  7. The entropy associated with one, two or three guards [archive] is 9, 17 and 25 bits, respectively.
  8. [archive]