This page is obsolete and no longer needed. See Advanced_Deanonymization_Attacks for the explanation.
Packet latency, as observed by an adversary outside Tor, drops significantly when the CPU is under load; the effect is visible in both ICMP and TCP traffic. It is caused by CPU c-state transitions. Non-solutions: running a stress process at a high nice level, or disabling c-states entirely, because both would heavily impact battery life and CPU temperature.
The chosen solution is to add a random delay to each packet in order to mask this effect.
- Use /etc/NetworkManager/dispatcher.d hooks to run the tc command whenever any NIC comes up: https://askubuntu.com/questions/1111652/network-manager-script-when-interface-up
- Interface names must be filtered to exclude virbr* devices, otherwise virtual environments and local daemons will incur a needless penalty.
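The two points above could be sketched as a dispatcher hook like the following. The file name, delay values, and the exact set of excluded interface patterns are illustrative assumptions, not taken from the original page.

```shell
#!/bin/sh
# Hypothetical hook, e.g. /etc/NetworkManager/dispatcher.d/30-netem (name assumed).
# NetworkManager invokes it as: <script> <interface> <action>

# Exclude virtual interfaces so VMs and local daemons are not penalized.
is_filtered_iface() {
    case "$1" in
        lo|virbr*|vnet*) return 0 ;;   # loopback, libvirt bridges/taps
        *) return 1 ;;
    esac
}

IFACE="$1"
ACTION="$2"

if [ "$ACTION" = "up" ] && ! is_filtered_iface "$IFACE"; then
    # Add a random per-packet delay; limit raised so packets are not dropped.
    tc qdisc add dev "$IFACE" root netem delay 10ms 5ms limit 12500
fi
```

Since the hook only acts on "up" events, it also satisfies the point below about ignoring "down" events.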
- There is no need to react to "down" events, because the tc qdisc persists when a NIC goes down and comes back up. Re-running the command in that situation fails with "Error: Exclusivity flag on, cannot modify."
- The limit parameter must be raised from its default of 1000, or packets get dropped as traffic demand increases. A limit of 12500 covers connection speeds of up to 1 Gbps.
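As a sanity check on that number: the queue must hold roughly rate × delay / packet size packets. Assuming 1500-byte packets and about 150 ms of worst-case queued delay (both assumptions, not stated on this page):

```shell
# Queue sizing sketch: limit >= rate * delay / packet_size.
# Assumptions: 1 Gbps link, 1500-byte packets, ~150 ms of queued delay.
RATE_BYTES_PER_S=125000000   # 1 Gbps = 125,000,000 bytes/s
DELAY_MS=150
PACKET_BYTES=1500
LIMIT=$(( RATE_BYTES_PER_S / 1000 * DELAY_MS / PACKET_BYTES ))
echo "$LIMIT"   # prints 12500
```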
- sudo tc qdisc lists all the default queue disciplines set up for interfaces on Linux.
- Info on various qdisc filter properties: https://wiki.archlinux.org/index.php/advanced_traffic_control
Relevant Commands and Testing
- Set up a VPN connection in Whonix ™ WS, then run ping <foo>.com
- Simulate CPU load with stress (press Ctrl + C to stop):
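A minimal load generator, assuming the stress package is installed; the exact invocation from the original page is not preserved, so the worker count here is a guess:

```shell
# Busy all CPU cores until interrupted with Ctrl + C.
stress --cpu "$(nproc)"
```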
- Run this command for mitigation. It will mask the latency patterns induced by the c-state transitions:
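The exact parameters from the original page are not preserved; the following is a sketch with an assumed interface name and assumed delay/jitter values:

```shell
# Add ~10 ms +/- 5 ms of random per-packet delay on eth0 (values assumed),
# with the queue limit raised to avoid drops at up to 1 Gbps.
sudo tc qdisc add dev eth0 root netem delay 10ms 5ms limit 12500
```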
- To test with TCP ping:
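One common way to measure TCP round-trip times is hping3 (an assumption; the tool used on the original page is not preserved, and the host and port are placeholders):

```shell
# Send TCP SYN probes to port 443 and report round-trip times.
sudo hping3 -S -p 443 <destination>
```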
- To detach tc from the interface:
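Deleting the root qdisc reverts the interface to its default queue discipline (the interface name here is assumed):

```shell
# Remove the netem qdisc from eth0, restoring the default qdisc.
sudo tc qdisc del dev eth0 root
```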