Whonix ™ Threat Model


Technical Design Documentation about Whonix ™ Threat Model. Attacker Capabilities. Goals. Attack Surface.

Threat Model[edit]


The key words "MUST", "MUST NOT", "REQUIRED", "SHALL", "SHALL NOT", "SHOULD", "SHOULD NOT", "RECOMMENDED", "MAY", and "OPTIONAL" in this document are to be interpreted as described in RFC 2119.

The Threats[edit]

First, let's quickly consider all the possible threats we could face. Threats against computer systems fall into five categories:

  1. Non-technical/non-computer-based and legal attacks. E.g. "rubber-hose cryptanalysis", mandatory key disclosure, subpoenas, real detective work/espionage (done by real humans, not bots), bugs (listening devices), surveillance cameras...
  2. Hardware attacks through physical access. RAM and hard disk forensics, keyloggers, TEMPEST, Evil Maid attacks...
  3. Backdoors. Intentional weaknesses or additional hidden "functionality" put into your system by a manufacturer, vendor or distributor (hardware or software), allowing remote access.[1] They can be introduced anywhere between design/source code, manufacturing/compilation and distribution.
  4. Vulnerabilities. Coding or hardware design errors that can be exploited to gain remote access.[1] In practice, vulnerabilities and backdoors can look exactly the same; only the way they are created differs.
  5. Design flaws. Even in a backdoor-free, error-free system with perfect physical and legal protection (no such thing exists in reality), perfect security may still be impossible. This is especially true for anonymity systems. While cryptography has a perfect design (the one-time pad), anonymity is not binary: you are never simply anonymous or not, you only have some non-zero but imperfect degree of anonymity. Your anonymity depends on hiding amongst others. Anonymity networks require trust and a large, diverse userbase. A perfect security system wouldn't require trusting other parties or depending on such extraneous factors. Any design of a complex but still usable system (all computer systems are complex) needs to make tradeoffs. That's why pretty much everything uses imperfect cryptography instead of the one-time pad...

Bug Research: IP discovery through Tor behind an isolated network seems to be only about learning the IP address through which a Tor client connects to the Internet. It doesn't mention other information leaks.


[1] Used in the broadest sense: this can be classic remote code execution, privilege escalation, DoS... Essentially, they all allow remote parties to send your system instructions that diverge from the allowed security policy.

Attacker capabilities[edit]

  • Legal/non-technical "attacks" and physical access are expensive. Anything that requires real humans to do the work, instead of letting computers and algorithms do it, is prohibitively expensive against more than a small target group.
  • Backdoors (hardware or software) are expensive. It is hard to place one, and it is risky: the more you use it, the more likely it is to be caught and closed. Adversaries are not going to waste one on anything but high-profile targets. However, well-hidden passive "backdoors" are a concern; they can be practically impossible to detect and remain functional for years. Imagine a backdoor in a compiler affecting the PRNG. The cost of such a backdoor is high (for this example an attacker needs patience, non-trivial coding and social engineering skills, as well as some crypto skills) but the "ROI" would be great too...
  • Software attacks are cheap. A good 0day can still cost a lot and is not useful for very long once you start using it. But new vulnerabilities are found all the time and new exploits can be written. Compared to physical access and backdoors, software attacks are cheap and can be fully automated against a large target set. If you are wondering how realistic or important the client application 0day threat is, consider pwn2own 2012: a semi-legal, but most certainly immoral and reckless, company demonstrated how they can exploit every web browser, circumventing all state-of-the-art security features (MAC, sandboxing, ASLR, NX, etc.). This is not too surprising; we already established that big code bases always have bugs, and a subset of those bugs will always be exploitable. The noteworthy news here is that this company doesn't report its findings to the developers; it sells them to governments (and/or the highest bidder) as offensive "cyber weapons" (naturally not their term). It is only likely that Tor vulnerabilities are also sold behind closed doors. You MUST act accordingly and never rely on Tor alone if your threat model demands "reasonably strong anonymity" even from not-nearly-global but well-funded adversaries.
  • Crypto attacks are enormously expensive. If you know how to break currently used crypto, you will try to keep it a secret at all costs. Most crypto attacks will most likely require enormous computing power.
  • Attacking design flaws: it depends... The costs can vary greatly, but in every case, once you attack a design flaw and it becomes publicly known, the target will fix the flaw and become harder to attack in the future. This can result in an arms race, and often neither party wants to go that way. A grave problem for the defender can be that some systems are so widely used that it becomes almost impossible to migrate away to a more secure design. The best example of this is the certificate authority system. Its design is flawed on so many levels, yet it is relied upon, often solely, for all authentication and confidentiality, to this day, by pretty much everyone who has ever used a computer. Similarly, a lack of crypto and hash agility in many applications results in a dependence on weak and broken algorithms; not because we don't know any better, but because backwards compatibility and some bad apples spoil it for everyone. There is certainly a threshold where, if an attacker plays it slow and carefully, he can stay just below the radar for a long time and let everyone believe that it is not "that bad yet". For example, a collision attack against SHA-1 (which would affect software updates, version control systems as well as Tor, GPG, TLS and pretty much everything that uses public key crypto) has been known to be "within the reach of a powerful adversary" since 2005; with Moore's Law and Schneier's Law ("Attacks always get better; they never get worse") in play, the range of potential adversaries has only grown. SHA-1 will probably be relied on pretty universally until someone publicly demonstrates a collision. Not because we don't know any better, but because the costs of switching are so high.

Our Goals[edit]

  • Hiding our Location and Identity when transmitting information
  • Confidentiality, Integrity, Authenticity and Availability of transmitted data

Our Attack Surface[edit]

Basically, any part we use to achieve our goals can be attacked:

  • We REQUIRE physical security and privacy. It is obvious that anonymity and confidentiality are difficult or impossible to achieve otherwise.
  • Tor: the code, the design, the whole network (see Dev/Anonymity Network for more). We also can't look at Tor as an isolated piece of software; we SHOULD consider its whole TCB, because if any part of the TCB can be subverted, the Tor process can be subverted. The attack surface in the case of backdoors is the full TCB; in the case of vulnerabilities, "only" the network-facing parts of the TCB.
    • The Whonix-Gateway ™ Hardware
    • GNU/Linux/OS: kernel, essential userland, APT and all software update packages provided by the OS distributor/vendor
    • Tor itself
    • The build system (gcc...)
  Using Virtualization (instead of Physical Isolation) increases the TCB significantly. In addition to the above:
    • Host Kernel
    • Host applications
    • Host software updater
    • Virtualization software
  • We REQUIRE that the software we use does no more than absolutely necessary, so that it does not reduce our anonymity set through protocol leaks, identity correlation through Tor circuit sharing, or fingerprinting. For that, we also have to rely on the Whonix-Workstation ™ and on the TCB of the applications we use (for more, see Whonix ™ Protocol-Leak-Protection and Fingerprinting-Protection).

For Confidentiality, Integrity and Authenticity, we (all of us, not just Whonix ™...) are still far too heavily dependent on TLS (Transport Layer Security, the "s" in https). Further, we depend on:

  • The complete Host (Hardware and Software)
  • all software in Whonix-Workstation ™
  • TLS/PKI infrastructure
  • In case of onion services: The full location/identity TCB.
  • The "other end", the security of whoever you are communicating with, unless it is symmetric key crypto and you got the only key.

To achieve Availability of information we depend on:

  • the upstream internet provider, uptime and connectivity
  • Tor network resilience against DoS
  • resilience of webservers you connect to against DoS
  • Your own system's resilience against DoS

Applying the threat model to Whonix ™[edit]

First of all, you don't want to end up on the "high profile target" list of any of your resourceful adversaries, because if you do, neither Whonix ™ nor any other computer-based system can protect you.

Whonix ™ is mostly a software project. There is not a lot we can do on the hardware and physical side of things. Hardware security would be a much smaller problem if we weren't using monolithic systems where dozens of parts have full system access through DMA (or indirectly through hardware drivers that all run in kernel space). Have a look at Qubes OS for state-of-the-art hardware deprivileging. It requires modern hardware, software support still needs to catch up, and it needs a bigger user base; therefore this is not available in Whonix ™ yet, although it shouldn't be hard to port Whonix ™ to Qubes OS if anyone were willing to do it.

We assume the hardware is trustworthy and that the physical location is not compromised. We RECOMMEND using "clean" computers made of parts manufactured by reputable companies and paying in cash, so that hardware IDs do not leak our identity.

To protect against forensic analysis, we SHOULD use full disk encryption on the host and wipe the RAM on shutdown.

  • Implementation status: Because Whonix ™ doesn't include a host operating system yet, we RECOMMEND in the Whonix ™ documentation that users do this themselves.

Further, some pointers:

  • DMA is bad, IOMMU is good; avoid attaching untrusted (or any, for that matter) devices to FireWire and PCI* interfaces.
  • Use open source hardware, or at least open-source-friendly hardware whose specifications are public. Open source means that multiple companies can implement the hardware, Linus's Law ("given enough eyeballs, all bugs are shallow") applies, and backdoors are harder to hide.
  • Simple is beautiful; complexity is the number one enemy of security. Whonix-Gateway ™ could run on very simple hardware like the Raspberry Pi.

Crypto is even worse; the only thing we can do is use cascades of different cipher and hash families. This is slightly more complicated than it sounds, and we don't know of any software that does it right.
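At least the hash side of such a cascade can be sketched in a few lines of Python. This is purely our own illustration (the function names are not an existing API): data is digested under two independent hash families (SHA-2 and BLAKE2), so that forging a verifying input would require a compatible break of both families on the same input.

```python
import hashlib
import hmac


def cascade_digest(data: bytes) -> str:
    """Digest data under two independent hash families (SHA-2 and BLAKE2)
    and concatenate the results. A forgery must collide in both families
    simultaneously, on the same input."""
    sha = hashlib.sha256(data).hexdigest()
    blake = hashlib.blake2b(data, digest_size=32).hexdigest()
    return sha + ":" + blake


def cascade_verify(data: bytes, expected: str) -> bool:
    # compare_digest performs a timing-safe comparison.
    return hmac.compare_digest(cascade_digest(data), expected)
```

Note that cascading digests is the easy half; cascading ciphers correctly (independent keys, sensible modes, no cross-layer oracles) is precisely where, as stated above, we know of no software that does it right.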

As established above in "Attacker capabilities", the main attack vector we need to be concerned about (and can actually do something about) is software attacks.

To protect against software backdoors and vulnerabilities:

  • The source needs to be audited, preferably by yourself and many others, and should ideally be bug-free. Popular open source projects are your best bet. Whonix ™ is fully open source; anyone can audit and patch it.
  • If you rely on others (you do), you need to make sure that you get the same code as everyone else. Hash sums + signatures: the more public they are, the better. If signing happens completely transparently, invisible to the user (as is common in proprietary systems), it is much more difficult to determine that you get the same files as everybody else. By default, signing is only concerned with making sure that you get code from someone with access to the key. This protects against third parties, but not insiders. Checksums and signatures of Whonix ™ and its updates are publicly available.
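As a small illustration of the checksum half of this (file names and digests here are hypothetical), verifying that a downloaded image matches a published digest takes only a few lines of Python. The crucial, non-code part is that the published digest itself must come from an authenticated and public channel, e.g. a widely mirrored, gpg-signed checksum file:

```python
import hashlib
import hmac


def sha512_file(path: str) -> str:
    """Stream a file through SHA-512 without loading it all into memory."""
    h = hashlib.sha512()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def matches_published(path: str, published_hex: str) -> bool:
    """Compare a local file's digest against a published hex digest."""
    return hmac.compare_digest(sha512_file(path), published_hex.strip().lower())
```

A match only proves you have the same file the digest was computed over; it says nothing about who computed it, which is why the accompanying signature check (and its public visibility) matters just as much.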
  • The code you use SHOULD have a good track record, and known issues MUST be corrected immediately. Code that doesn't get checked but has zero known problems is worse than code that has hundreds of known and patched security issues but is constantly being reviewed and worked on. However, trustworthy code should have few, if any, security issues despite having been audited thoroughly; after all, projects that constantly need security patches are constantly insecure. But it is not really about numbers; check the severity of the bugs. Linux needs many security patches, but fatal flaws that would affect Whonix ™ (i.e. remote code execution with root privileges) are few and far between. Fatal flaws in user-facing applications (like browsers, pdf readers, media players, IRC clients) are frequent, and therefore such software is not part of the TCB (see below).
  • Checking the source is not enough; you need trustworthy binaries. Analyzing binaries is a lot more difficult than analyzing source code. If you rely on source code auditing, you absolutely REQUIRE a trustworthy compiler (and a complete trusted system to run it on), or the use of DDC (Diverse Double-Compiling). Similarly, you already need a trusted system to check signatures and hashes. This is the bootstrapping chicken-and-egg problem we mentioned earlier. Generally, to address it you need to use at least two diverse systems that are completely independent (including their parent build systems and compilers) and set things up in such a way that both would have to be compromised by the same adversary, in a compatible way, to result in a fatal security breach.
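The "two independent systems" idea can be sketched as follows. This is a simplified illustration of our own, assuming reproducible builds, where independent toolchains should produce byte-identical artifacts (with non-reproducible builds, the comparison would instead have to happen at the level of build inputs and behavior):

```python
import hashlib


def artifact_digest(path: str) -> str:
    """SHA-256 of a build artifact, streamed from disk."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()


def independent_builds_agree(artifact_paths) -> bool:
    """True only if artifacts built on independent, diverse systems are
    byte-identical. A single compromised toolchain then shows up as a
    mismatch instead of silently producing a backdoored binary."""
    return len({artifact_digest(p) for p in artifact_paths}) == 1
```

The security gain comes entirely from the independence assumption: an adversary must compromise all participating build environments in a mutually compatible way before a tampered binary can pass unnoticed.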

If the code base is huge and complex, code auditing is not a realistic strategy; therefore we need to reduce complexity and attack surface by:

  • making the TCB small and
  • using privilege separation: The TCB for identity/anonymity is as small as it gets on a monolithic general-purpose OS. It could be smaller (e.g. a custom kernel, busybox, uclibc instead of glibc and the GNU utils), but that comes at the cost of maintainability, security updates and ease of use. (And it still leaves us with a comparably huge monolithic kernel and a gigantic gcc infrastructure.) Linux/Xorg (with DAC or even MAC) is not very good at privilege separation (e.g. there is no trusted path). The strongest privilege separation is offered by using multiple physical computers; that's what we RECOMMEND: Physical Isolation and multiple Whonix-Workstation ™. The more practical, but less secure, alternative is virtualization. For details about privilege separation between IRC/browser/clients and Tor, see Dev/Technical Introduction.

We've thought a lot about making Whonix ™ yet more secure, but with the (free and open source) tools available today it doesn't seem possible. What we'd need is an isolation kernel that offers strong guarantees of isolation between the different subsystems and would ideally be verified, so you don't have to worry about sophisticated backdoors (apart from the compiler; see "Trusting Trust"/chicken and egg), software vulnerabilities, and even some hardware backdoors and vulnerabilities (through an IOMMU and a microkernel design, hardware and firmware components, apart from the CPU, can be moved out of the TCB). For fingerprint/protocol leaks and the CIA Triad we depend on the Whonix-Workstation ™ TCB. This is a hopelessly large TCB/attack surface. For this reason we use yet another approach, which can be called "security through non-persistence": you should make extensive use of the clone and roll-back features of the virtual machine. Instead of privilege separation within a Tor-Workstation (e.g. by using a Mandatory Access Control framework), which would come at great usability cost, we recommend the coarse-grained isolation provided by multiple VMs (also see Advanced_Security_Guide#About_grsecurity).

To protect against software attacks (vulnerabilities) in particular:

  • make software hard to attack: ASLR, canaries, NX, kernel hardening (see the Debian section in About Debian). Note that this makes attacks "harder" but not impossible. Hardening is no replacement for fixing bugs; in a really trustworthy system, hardening would not be necessary.

To mitigate design flaws, the primary strategy is eliminating single points of failure. You SHOULD NOT rely on Tor alone; use wifi or tunnel through another system running a VPN. You SHOULD NOT rely on TLS alone; use GPG as well (and vice versa). You SHOULD NOT rely on a software monoculture, and you SHOULD NOT have to rely on the Debian infrastructure never being compromised. We are currently guilty of not following this advice. To lessen the impact of a potential compromise of the update infrastructure (which includes developer machines, build systems, key security and cryptographic primitives like the PRNG; remember openssl or Flame?), we do not enable automatic software updates. After all, they are essentially remote root-level backdoors (albeit with some checks in place).

Notes about securing Confidentiality, Integrity and Authenticity[edit]

This is no different from any other computer system without Tor. You MUST use end-to-end public key encryption; there is not really an alternative to that. But there are alternative implementations: TLS, SSH, GPG... End-to-end means both ends need to be secure; one end is the Whonix-Workstation ™, which you can control and secure. In the case of onion services, the Whonix-Gateway ™ is ALSO part of the TCB!

Whonix-Workstation ™ is not specifically hardened, because hardening a desktop system to a degree that makes it actually secure against a serious adversary makes the system unusable. Instead, we follow the nuke-and-restore approach; see Multiple Whonix-Workstation ™ and the Recommendation to use multiple Snapshots.

About Availability: The most resilient approach is probably a distributed data store, like Tahoe-LAFS or Freenet. I2P offers multi-homing built in. For Tor, we don't know of any solutions ready for general use.

Notes about End-to-end security of Onion Services[edit]

Moved to Notes about End to end Security of Onion Services

Attack on Whonix ™[edit]

There is a comparison of Tor Browser, Tails, Whonix ™ Standard/Download version (host+VM+VM) and Whonix ™ with Physical Isolation in chapter Attacks against Whonix ™.

Design Document, innovations and research[edit]

Whonix ™ developer adrelanos is also interested in serious design documents. There is only one other design document with respect to Tor, transparent/isolated proxying, isolation and virtual machines: the old design paper "A Tor Virtual Machine Design and Implementation" from 2009, which adrelanos has read. Afterwards, adrelanos forked that paper, removed non-essential, non-security-relevant material, and made sure all considerations from the old design paper were implemented in Whonix ™.

The whole Whonix ™ Documentation and Design supersedes the old design document. The TorifyHOWTO is also co-maintained by adrelanos.

The following documents are also being monitored. If there are any security/privacy/anonymity updates, they will be considered for Whonix ™.

The Whonix ™ project also enjoys asking good questions, providing useful suggestions and creating useful things.

The following list is to keep track of all discussions, so they can be reviewed again in the future to see if anything has changed.

This list is incomplete...

Whonix ™ Threat Model not available as pdf or paper version[edit]

You may skip this optional chapter.

While we read real papers, like the onion routing design paper and many others, we can't be coerced into creating a paper "Designing an anonymous operating system" with LaTeX as a pdf.

As Mike Perry, developer of Tor Browser, said: "I find the pdf format heavy and unnerving from a security perspective." That goes for us too, and we also find the pdf format inconvenient. It would also take us some time to learn LaTeX, pdf creation, formatting etiquette and so on; time we would rather invest in improving the design, development and so on. The "Onion routing design paper" is now also outdated, and we would rather have an editable, often-updated and revised website than a paper/pdf.

Tails also does not have a design paper as a pdf, and still has lots of users and developers. TrueCrypt also had no design paper about plausible deniability and still got reviewed by Bruce Schneier et al. Therefore, we really don't see the benefit of having the threat model available as a pdf, but if we have overlooked something, please don't hesitate to tell us.