Surfing Posting Blogging

About this Surfing Posting Blogging Page
  • Contributor maintained wiki page.
  • Support Status: stable
  • Difficulty: easy
  • Contributor: HulaHooparchive.org
  • Support: Support

Surf, Blog and Post anonymously on the Internet. Essential knowledge about Anonymous File Sharing, Keystroke / Mouse Fingerprinting and Stylometry risks. Tips for avoiding detection.

Introduction[edit]

Tor Browser is installed in Whonix by default to browse the Internet anonymously. Tor Browser is optimized for safe browsing via pre-configured security and anonymity settings that are quite restrictive. It is recommended to read the entire Tor Browser chapter for tips on basic usage before undertaking any high-risk activities.

Whonix-Workstation contains all the necessary tools to post or run a blog anonymously. It is recommended to review the following sections, as well as follow all the recommendations on this page.

Anonymous File Sharing[edit]

Audio Recordings[edit]

It is possible for adversaries to link audio recordings to the specific hardware (microphone) that is used. This has implications for shooting anonymous videos. It is also trivial to fingerprint the embedded audio acoustics associated with the particular speaker device; for example, consider ringtones and video playback in public spaces. [1] For these reasons it is recommended to follow the operational security measures in the Photographs section when sharing audio files.

This recommendation equally applies to any data that is recorded by every other sensor component, such as accelerometersarchive.org. [2] The best way to defend against this threat is to deny all access to the hardware in question, while also avoiding the sharing of unencrypted data recorded by sensors. Similarly, it is inadvisable to share audio with third parties who have limited technical ability or who are potentially malicious.

Documents[edit]

Digital watermarks are a subset of the science of steganographyarchive.org and can be applied to any type of digital media, including audio, pictures, video, texts or 3D models. [3] In basic terms, covert markers that are imperceptible to humans are embedded into the "noise" of the data: [4]

Digital watermarking is defined as inserted bits into a digital image, audio or video file that identify the copyright information; the digital watermarking is intended to be totally invisible unlike the printed ones, bits are scattered in different areas of the digital file in such a way that they cannot be identified and reproduced, otherwise the whole goal of watermarking is compromised.

A digital watermark is said to be robust if it remains intact even when the file is modified. [5] [6] Besides protecting copyright, watermarking is also used to trace information leaks back to the specific source. A good countermeasure to this threat is to run documents through an optical character recognition (OCR)archive.org reader and share the output instead.
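
For example, a minimal Python sketch of this countermeasure, assuming the Tesseract OCR engine plus the pytesseract and Pillow packages are installed (file names are placeholders), might look like this:

    # Minimal sketch: re-type a scanned page via OCR so that only the
    # recognized characters -- not the original bitmap, which may carry a
    # watermark -- are shared. "scan.png" and "clean.txt" are placeholders.
    from PIL import Image
    import pytesseract

    recognized_text = pytesseract.image_to_string(Image.open("scan.png"))

    with open("clean.txt", "w") as out:
        out.write(recognized_text)

Note that OCR output still reflects the document's textual content and wording, so the stylometry advice later on this page continues to apply.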

According to a talk by Sarah Harrison from WikiLeaks, [7] source tracing can also happen through much simpler techniques such as inspecting the access lists for the materials that have been leaked. For example, if only three people have access to a set of documents then the hunt is narrowed down considerably.

Redacting identifying information in electronic documents by means of image transformation (blurring or pixelization) has proven inadequate for concealing the intended text; the words can be reconstructed by machine learning algorithms. Solid bars are sufficient but they must be large enough to fully cover the original text. Otherwise, clues are left about the length of underlying word(s) which makes it easier to infer the censored text based on the sentence remainder. [8] Only digital redaction bars are recommended as manual Sharpie ones can be insufficient, leading to leaks when documents are scanned.[9]
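
As an illustration of the solid-bar approach, the following Python sketch (assuming the Pillow package; coordinates and file names are placeholders) draws an opaque rectangle over the sensitive region so the covered pixels are gone rather than merely blurred:

    # Minimal sketch: redact a region with a solid black rectangle instead of
    # blurring or pixelating it. The box must be generous enough to cover the
    # full extent of the underlying text, not just the word outlines.
    from PIL import Image, ImageDraw

    image = Image.open("document.png").convert("RGB")
    draw = ImageDraw.Draw(image)

    # (left, top, right, bottom) of the area to censor -- placeholder values.
    draw.rectangle((120, 340, 560, 380), fill="black")

    # Save the censored page; the covered pixels are irrecoverably replaced.
    image.save("document-redacted.png")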

Photographs[edit]

Every camera's sensor has a unique noise signature because of subtle hardware differences. The sensor noise is detectable in the pixels of every image and video shot with the camera and can be fingerprinted. Just as ballistics forensics can trace a bullet to the barrel it came from, adversarial digital forensics can trace images and videos back to the camera that shot them. [10] [11] Note this effect is distinct from file metadata, which is easily sanitized with the Metadata Anonymization Toolkit v2 (MAT2). [12]
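
MAT2 remains the recommended tool because it understands many formats. Purely to illustrate what metadata removal means for a single JPEG, the following Python sketch (assuming the Pillow package; file names are placeholders) re-saves only the pixel data, discarding EXIF blocks such as camera model, timestamps and GPS coordinates:

    # Illustrative sketch only -- prefer MAT2 in practice. Re-saving just the
    # pixel data drops the EXIF metadata embedded by the camera.
    from PIL import Image

    original = Image.open("photo.jpg")
    stripped = Image.new(original.mode, original.size)
    stripped.putdata(list(original.getdata()))
    stripped.save("photo-stripped.jpg")

    # Important: this removes metadata only. The sensor noise (PRNU) pattern
    # described in the next section survives re-encoding.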

Photo-Response NonUniformity[edit]

A camera fingerprint arises for the following reason: [13]

Photo-Response NonUniformity (PRNU) is an intrinsic property of all digital imaging sensors due to slight variations among individual pixels in their ability to convert photons to electrons. Consequently every sensor casts a weak noise-like pattern onto every image it takes and this pattern plays the role of a sensor fingerprint.

The reason for this phenomenon is that all devices have manufacturing imperfections that lead to small variations in camera sensors, causing some pixels to register colors slightly brighter or darker than normal. When extracted by denoising filters, this noise forms a unique pattern. [14] Simply put, the type of sensor being used, along with shot and pattern noise, leads to a specific fingerprint.

The threat to privacy is obvious: if the camera reference pattern can be determined and the noise of an image is extracted, a correlation between the two can be computed. For example, recent researcharchive.org suggests that only one image is necessary to uniquely identify a smartphone based on the particular PRNU of the built-in camera's image sensor. [15] Major data mining corporations are starting to use this technique to associate the identities of camera owners with everything or everyone else they shoot. [16] It follows that governments have had the same capabilities for some time and can apply them to their vast troves of data.
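
The matching step can be pictured with the following Python sketch (assuming NumPy and SciPy; the arrays are random placeholders rather than a real photo and reference pattern): the noise residual of an image is approximated by subtracting a denoised copy, then correlated against a camera reference pattern.

    # Conceptual PRNU-matching sketch. Real pipelines estimate the reference
    # pattern from many flat-field images; here everything is synthetic.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def noise_residual(image):
        """Approximate sensor noise as the image minus a denoised copy."""
        return image - gaussian_filter(image, sigma=2)

    def correlation(residual, reference):
        """Normalized correlation: higher values suggest the same sensor."""
        a = residual - residual.mean()
        b = reference - reference.mean()
        return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    rng = np.random.default_rng(0)
    image = rng.random((512, 512))                       # placeholder "photo"
    reference = noise_residual(rng.random((512, 512)))   # placeholder fingerprint

    print(correlation(noise_residual(image), reference))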

There are methods to destroy, forge or remove PRNUarchive.org, but these should be used with caution: related research shows that spoofing sensor fingerprints in image files is non-trivial and the resulting forgeries are easily detected. [17] [18]

Other Vectors[edit]

Other unique camera identifiers include the specific JPEG compression implementation, a distinct pattern of defective (hot/dead) pixels, focal length and lens distortion, camera calibration and radial distortion correction, the distribution of pixels in a RAW image, and statistical tests such as the Peak to Correlation Energy (PCE) ratio. [19]

Operational Security Advice[edit]

This section assumes the user wants to preserve their anonymity, even when publicly sharing media on networks that are monitored by the most sophisticated adversaries on the Internet. Always conduct a realistic threat assessment before proceeding. These steps do not apply for communications that never leave anonymous encrypted channels between trusted and technically competent parties.

Table: Operational Security Advice

Domain Recommendation
Current Devices
It is almost a certainty that photos and videos have been shared from your current devices through non-anonymous channels. Do not use any of these devices to shoot media that will be shared anonymously.
Suitable Devices

Some users will probably want to avoid phones altogether and use tablets instead, but for most situations phones are a reasonable choice:

  • Buy a new Android phone with cash if possible.
  • Avoid other phone platforms, because a proprietary operating system is a nonstarter.
  • In all cases flash a freedom and privacy-respecting ROM before using the camera. Be aware that the glorified corporate malware that comes pre-installed on the phone will leak a range of data to the cloud.
Safe Use
  • The camera must only be reserved for anonymous media.
  • Do not commit serious mistakes like taking "selfies" or photographing places or people associated with you.
  • Sanitize metadata with MAT2 before sharing photographs anonymously online.
  • Completely obscure faces with solid fills using an image manipulation program. Advancements in neural nets and deep machine learning make pixelated or gaussian blurred faces reconstructable. [20] [21]
  • Consider using the ObscuraCamarchive.org app from The Guardian Project to protect the identities of protestors: [22]
    • It pixelates images using a technique resistant to facial reconstruction.
    • ObscuraCam also offers a full pixel removal "black bar" option.

Keystroke Fingerprinting[edit]

Introduction[edit]

Keystroke biometric algorithms have advanced to the point where it is viable to fingerprint individuals based on soft biometric traits. This is a privacy risk because masking spatial information -- such as the IP address via Tor -- is insufficient for anonymity. [23]

Unique fingerprints can be derived from various dynamics: [24]

  • Typing speed.
  • Exactly when each key is located and pressed (seek time), how long it is held down before release (hold time), and when the next key is pressed (flight time).
  • How long the breaks/pauses are in typing.
  • How many errors are made and the most common errors produced.
  • How errors are corrected during the drafting of material.
  • The type of local keyboard that is being used.
  • The likelihood of being right or left-handed.
  • Rapidity of letter sequencing indicating the user's likely native language.

A unique neural algorithm generates a primary pattern for future comparison. It is thought that most individuals produce keystrokes that are as unique as handwriting or signatures. This technique is imperfect; typing styles can vary during the day and between different days depending on a person's emotional state and energy level. [24]

Unless protective steps are taken to obfuscate the time intervals between key press and release events, it is likely most people can be deanonymized based on their keystroke manner and rhythm biometrics; see Obfuscating Keystroke Time Intervals to Avoid Identification and Impersonationarchive.org. Adversaries are likely to have samples of clearnet keystroke fingerprinting which they can compare with "anonymous" Tor samples. One basic precaution is to avoid typing into browsers with JavaScript enabled, since JavaScript enables this deanonymization vector. Text should be written in an offline text editor and then copied and pasted into the web interface when it is complete.
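
To make the threat concrete, the following Python sketch computes the hold-time and flight-time features described above from a hypothetical log of (key, press_time, release_time) tuples in seconds:

    # Hypothetical keystroke log; a real collector would record these values
    # from key events, for example via JavaScript in the browser.
    events = [
        ("h", 0.000, 0.090),
        ("i", 0.210, 0.290),
        ("!", 0.520, 0.610),
    ]

    hold_times = [release - press for _, press, release in events]
    flight_times = [
        events[i + 1][1] - events[i][2]   # next press minus previous release
        for i in range(len(events) - 1)
    ]

    print("hold times:  ", hold_times)
    print("flight times:", flight_times)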

There are several related anonymity threats that also need to be considered; these are discussed in the following sections.

Keystroke Anonymization Tool (kloak)[edit]

Overview[edit]

kloak is designed to stymie adversary attempts to identify and/or impersonate users based on their biometric traits. The GitHub site succinctly describes kloak's purpose and the tradeoff between usability and the level of privacy. Notably, shorter maximum delays applied to key press and release events reduce overall anonymity: [26]

kloak is a privacy tool that makes keystroke biometrics less effective. This is accomplished by obfuscating the time intervals between key press and release events, which are typically used for identification. This project is experimental.

...

kloak works by introducing a random delay to each key press and release event. This requires temporarily buffering the event before it reaches the application (e.g., a text editor).

The maximum delay is specified with the -d option. This is the maximum delay (in milliseconds) that can occur between the physical key events and writing key events to the user-level input device. The default is 100 ms, which was shown to achieve about a 20-30% reduction in identification accuracy and doesn't create too much lag between the user and the application (see the paper below). As the maximum delay increases, the ability to obfuscate typing behavior also increases and the responsiveness of the application decreases. This reflects a tradeoff between usability and privacy.

While kloak makes it hard for adversaries to identify individuals or to replicate their typing behavior -- for example to overcome two-factor authentication based on keystroke biometrics -- it is not perfect:

  • Small delays are not effective; higher values that can be tolerated are preferable.
  • It does not address stylometric threats.
  • Held-down key presses that repeat at a unique rate can lead to identification.
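
The buffering strategy quoted above can be pictured with the following Python sketch. It is only a toy model with hypothetical timestamps, since kloak itself operates on low-level Linux input events rather than on lists of numbers:

    # Toy model: every event is held back by an independent random delay of up
    # to max_delay_ms before being forwarded, which blurs the original
    # press/release timing while preserving event order.
    import random

    def obfuscate(timestamps_ms, max_delay_ms=100):
        """Return forwarding times, kept monotonic so event order is preserved."""
        forwarded = []
        last = 0.0
        for t in timestamps_ms:
            candidate = t + random.uniform(0, max_delay_ms)
            last = max(last, candidate)
            forwarded.append(last)
        return forwarded

    original = [0, 80, 95, 240, 300]   # hypothetical key event times (ms)
    print(obfuscate(original))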

Testing and Interpretation[edit]

NOTE: The test website documented below (keytrac.net) seems to be permanently down. This wiki page has yet to be updated to use one or more of the following tests. Help welcome!

It is recommended to test that kloak is actually working by trying an online keystroke biometrics demo such as https://www.keytrac.net/en/tryout. Three different scenarios are available, but "Train normal" (without kloak running) is not recommended for anonymity reasons:

  1. Train normal, test normal
  2. Train normal, test kloak
  3. Train kloak, test kloak

The KeyTrac demo allows the entering of a username and password on the enrollment page and then testing it on an authentication page. Below is a sample result and interpretation of entering a username and password with/without kloak running, using both training methods.

Table: Sample kloak Test Results

kloak Configuration Results and Interpretation
Train normal, test normal
  • trial 1: 94% accuracy
  • trial 2: 92% accuracy
  • trial 3: 94% accuracy
Train normal, test kloak
  • trial 1: 18%
  • trial 2: 15%
  • trial 3: 19%
Train kloak, test kloak
  • trial 1: 40%
  • trial 2: 42%
  • trial 3: 36%

From the first test set it is evident that without kloak, users can be identified with a high degree of certainty. The second test set demonstrates that kloak definitely obfuscates typing behavior, making it difficult to authenticate or identify a particular user. Finally, the third set shows that users who run kloak may look "similar" to one another. That is, it might be possible to distinguish kloak users from non-kloak users; if this is true, then the anonymity set will increase as more users start running kloak.

Mouse Fingerprinting[edit]

Mouse or cursor tracking occurs when software collects the positions of the mouse cursor and click data on the computer. While this can have benefits for web designers, it also poses a privacy (profiling) threat. Without explicit user consent or awareness, a range of data can be leaked via JavaScript, plug-ins or other software: [27]

  • JavaScript and Cascading Style Sheets (CSS) [28] readily allow developers to track users' mouse movements by simply entering relevant code on the webpage. This has already been employed on high-traffic websites, such as search engines.
  • Similar to JavaScript, installed and enabled software modules (plug-ins) can track mouse movements.
  • Specific mouse tracking software can reveal:
    • mouse location
    • time stamps
    • mouse clicks
    • a mouse cursor hovering over embedded links and its duration
    • the amount of time spent in certain webpage areas
    • heat maps
    • full playbacks which retrace the mouse's trajectory
    • mouse wheel tracking [29] [30]

For a practical example of deanonymization, consider someone who regularly uses both clearnet and Tor with JavaScript enabled. Individuals have distinctly unique characteristics associated with mouse movements and mouse clicks. Therefore, if these research methods are used in the public domain, supervised learning methods are likely to "learn" the typical behavior of individuals. Over time, it may be possible to link "anonymous" activities with the known profile of a clearnet user with a high degree of probability. [27] [31]
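
To illustrate the kind of features such profiling relies on, the following Python sketch derives movement speed and pause durations from a hypothetical log of timestamped cursor positions:

    # Hypothetical cursor samples as (time_s, x, y); a tracker would collect
    # these via JavaScript or CSS-based techniques described above.
    import math

    samples = [(0.00, 100, 100), (0.05, 130, 110), (0.10, 180, 150), (0.40, 182, 151)]

    speeds, pauses = [], []
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        distance = math.hypot(x1 - x0, y1 - y0)
        dt = t1 - t0
        speeds.append(distance / dt)
        if distance < 5:                 # barely moving: count as a pause
            pauses.append(dt)

    print("mean speed (px/s):", sum(speeds) / len(speeds))
    print("pause durations:  ", pauses)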

Disabling JavaScript cannot fully mitigate mouse tracking, as it can also be done with just CSS. [32] The author of the kloak software tool has noted that high-accuracy device fingerprinting can be performed with DOM event timestamps, and this affects both keyboard and mouse events.

A potential solution is being tested which involves slight delays of mouse events to throw off phase estimation. [33] [34]

Covert Impairments in Human Computer Interaction[edit]

Recent researcharchive.org on deceptive input modifications - where a site deliberately misrepresents mouse movements or key presses to elicit a corrective user reaction - reveals that this reaction is apparently fingerprintable. This tactic is used in CAPTCHAs and site logins. Possible mitigations involve detecting third party meddling with inputs (since it is an active process) and applying anti-fingerprinting protections on the fly.

Stylometry[edit]

Whonix does not obfuscate an individual's writing style. Consequently, unless precautions are taken (see below), users are at risk from stylometric analysis based on their linguistic stylearchive.org. Research suggests only a few thousand words (or fewer) may be enough to positively identify an author, and a host of software tools are available to conduct this analysis.

This technique is used by advanced adversaries to attribute authorship to anonymous documents, online texts (web pages, blogs etc.), electronic messages (emails, tweets, posts etc.) and more. The field is dominated by AI techniques like neural networks and statistical pattern recognition, and is critical to privacy and security. Current anonymity and circumvention systems focus on location-based privacy but ignore the identity leakage in the content of the data itself, which allows authorship recognition with high accuracy (90%+ probability). [35]

There are multiple ways to conduct statistical analysis on "anonymous" texts, including: [35] [36]

  • keystroke fingerprinting, for example in conjunction with JavaScript
  • stylistic flourishes
  • abbreviations
  • spelling preferences and misspellings
  • language preferences
  • word frequency
  • number of unique words
  • regional linguistic preferences in slang, idioms and so on
  • sentence/phrasing patterns
  • word co-location (pairs)
  • use of formal/informal language
  • function words
  • vocabulary usage and lexical density
  • character count with white space
  • average sentence length
  • average syllables per word
  • synonym choice
  • expressive elements like colors, layout, fonts, graphics, emoticons and so on
  • analysis of grammatical structure and syntax

Fortunately, research suggests that purposefully obfuscating one's linguistic style, or imitating the style of another known author, largely defeats all stylometric analysis methods, leaving them no better than randomly guessing the correct author of a document. However, automated methods like machine translation services do not appear to be a viable method of circumvention. [35]
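
To make the feature list above concrete, the following Python sketch computes a few of the simpler measurements, such as average sentence length, vocabulary ratio and function-word frequency. The sample text and the tiny function-word list are placeholders; real analyses use much longer texts and larger word lists.

    import re

    text = "This is a placeholder text. It only exists to show the idea of it."
    function_words = {"the", "of", "and", "a", "to", "in", "it", "is", "this"}

    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[a-z']+", text.lower())

    avg_sentence_len = len(words) / len(sentences)          # words per sentence
    unique_ratio = len(set(words)) / len(words)              # crude lexical density
    function_word_freq = sum(w in function_words for w in words) / len(words)

    print(avg_sentence_len, unique_ratio, function_word_freq)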

Tips for Anonymous Posting, Blogging and Uploading[edit]

Before undertaking any anonymous activities, be sure to understand and exercise a healthy dose of Operational Security (OpSec). Even the best anonymity software available today cannot prevent catastrophic mistakes by individuals.

Blogging[edit]

Table: Blogging Tips

Activity Tips
Activity Partitioning
Separate all online activities and only use a dedicated email address for the blog.
Blog Administration
Usually a blog is administered via a web interface only. Use Tor Browser for all blog activities.
Blog Posting
Most blog software offers the option to schedule when new posts are published. It is safer to delay the publication of new posts to a time when you are no longer online, rather than publishing immediately. [37]
Email Address Registration

For an anonymous blog hosted on a third-party service, register it with a new and anonymous e-mail address (see E-Mail) that has never been used before and that is exclusively paired with Tor for logins and other related activity: [38]

  • Different Providers: The blog can be registered anonymously with different providers; for example, https://wordpress.com/archive.org
  • Payments: If using a premium product, keep the option open to pay anonymously.

Browser Input[edit]

A browser is an unsafe environment to directly write text, regardless of whether it is a forum post, email, webmail or IMAP-related reply.

Table: Browser Input Tips

Activity Tips
Accidental Searches
Text can be accidentally pasted into the search or URL bar, which triggers an unintended search across the public internet.
JavaScript

With JavaScript enabled, user behavior can be tracked and profiled as already noted in this chapter:

  • Keystroke Fingerprinting.
  • Mouse Fingerprinting. See additional footnotes. [39] [40]
  • When the methods above are combined with Stylometry, a user will be de-anonymized unless countermeasures are implemented, like faking one's authorship style [41] and using a spell checker to confuse stylometric analysis. [42]
  • Tor Browser defenses are based on skewing JavaScript's perception of time. [43] [44] The packaging of kloakarchive.org (keystroke-level online anonymization kernel) in Whonix provides a system-wide solution for keystroke and mouse profiling.
Text Editors
  • Offline Editors: It is recommended to prepare text in an offline text editor like KWrite and then copy and paste the content into the web interface once finished.

Hardware Threat Mitigation[edit]

Table: Hardware Threat Tips

Category Tips
Disable Dangerous Peripherals
  • Speakers / Microphones: It is advisable to shut off the speakers and microphone at all times, as newer methods of advertisement tracking can link multiple devices via ultrasound covert channels. [45] It is also possible to decrease the risk by playing video and audio from untrusted sources with headphones connected and adjusted to a low volume. [46] Speakers may also be repurposed into microphones by malicious software running on compromised devices. [47]
  • HDDs: Acoustic signals can be used to cause rotational vibrations in HDD platters in an attempt to create failures in read/write operations, ultimately halting the correct operation of HDDs.[48]
  • Active microphones expose the system to humanly unintelligible [49] or inaudible [50] acoustic commands if a virtual assistant is installed. Sound can also be maliciously weaponized to degrade performance and damage MEMS gyroscopes and accelerometers. [51]
  • Sensors: Gyroscopes can be used to discern the speech of a talker in their vicinity, [52] while accelerometers can be used to reconstruct the responses of the callee and of a virtual assistant played through the loudspeaker. [53]
  • LCD Coil Whine: The coil whine of LCD screens is unique enough to leak the information presented on the screen; it can be reconstructed by machine learning applied to wiretapped audio (for example, captured via the webcam microphone). [54] A solution is to turn off the device displaying sensitive data while chatting on the phone.
Remove External Devices
Remove all phones, tablets and so on from the room, both to prevent them from emitting watermarked sounds and to stop them from listening for keystroke sounds and watermarks. [55] [56] Similarly, do not make or take calls in the same room where anonymous browsing is underway, and do not run sensitive applications (like Tor Browser for Android) or have documents open on the phone before calls.
Side-channel Attacks
This class of attacks depends on eavesdropping on signals passively leaked by a trusted process, which a surveilling entity can use to reconstruct sensitive data on the computer. These attacks are more dangerous than the covert-channel attacks discussed below.
  • Energy leaks that reveal sensitive information are a long-studied area of cryptography research; see footnotes. [57] [58] [59] These attacks have been mitigated by software countermeasures in cryptographic libraries such as GPG.
  • There is a very crude proof-of-concept attack that remotely controls touchscreen devices via electromagnetic fields, with the caveat that it requires close proximity to an antenna. This shows the possibility of manipulating devices with TEMPEST-like attacks, not just spying on them. [60] [61]
  • Power LED Side-Channel Attack: A surveillance camera captures high-speed footage of the power LED on a smart card reader (or on an attached peripheral device) during cryptographic operations. [62] A simple mitigation is covering LED lights with electrical tape.
Covert-channel Attacks
In contrast to side-channel attacks, covert channels depend on a compromised process operating on the machine that exfiltrates data to the outside without being noticed by the machine operator. While attacks of this nature that cross security and virtualization boundaries on the same machine are known, this section covers air-gapped machines, which are the hardest targets to penetrate from the attacker's perspective. While using SSDs is a solution to many of these attacks, they bring another set of problems regarding the impossibility of secure data erasure.
  • Attacks include leaking sensitive data from an air-gapped machine via a computer's fan noise[63], hard drive noise [64], hard drive LED[65], CPU frequency patterns[66] or scanner light.[67] The data is leaked to a compromised security cam, cellphone or even drone flying outside the building window.
Wi-Fi Signal Emitters
Another keystroke snooping technique involves a Wi-Fi signal emitter (router) and a malicious receiver (laptop) that detects changes in the signal corresponding to the movements of the target's hands on the keyboard. According to researchers, a user's movement over the keyboard generates a unique pattern in the time-series of Channel State Information (CSI) values. A Wi-Fi signal based keystroke recognition system called WiKey can recognize typed keys from the CSI values at the Wi-Fi receiver end using Commercial Off-The-Shelf Wi-Fi devices. In real-world testing, "WiKey achieves an average keystroke recognition accuracy of 77.43% for typed sentences when 30 training samples per key were used. WiKey achieves an average keystroke recognition accuracy of 93.47% in continuously typed sentences with 80 training samples per key." Limitations include environmental variation, since the technique only works well in relatively stable environments; human motion in surrounding areas, changes in the orientation and distance of the transceivers, typing speed, and keyboard layout and size also influence the accuracy. [68] [69] [70] [71]

User Habits[edit]

Table: User Habit Tips

Category Tips
CAPTCHA / reCAPTCHA
Google's CAPTCHAs fingerprint the behavior of individuals. [72] Only enable JavaScript if a CAPTCHA absolutely must be solved. [73] [74]
Cookies
Remember to purge the browser's cookie and history cache periodically. When running Tor Browser, it is recommended to simply close Tor Browser after online activities are finished, then restart it.
Environment
  • Avoid public places where people are likely to shoulder surf or where CCTV cameras are deployed. It is also possible to reconstruct text from reflections in eyeglasses captured by 720p cameras. [75] When having sensitive conversations, even over encrypted channels, consider that AI is capable of reading lips captured on video footage. [76] Even if your hands are concealed while typing, it is possible for AI to infer with high accuracy what words are being typed by analyzing shoulder movements. [77] Heat traces from a recently typed password can be captured by thermal cameras in the vicinity of your hardware. [78]
  • External Devices: Avoid typing in places where open microphones are used, otherwise recorded keyboard sounds might provide enough information to accurately reconstruct what was typed. [79] The minute vibrations of tapping a touchscreen are detectable by virtual assistant microphones within a range of 0.5 meters, even in noisy environments; note that haptic feedback was not even enabled in the study. [80] [81]
File Sanitization

Generally, any blog pictures, documents or other files must have unique metadata removed (anonymized) before they are uploaded; check that the file format is supported by the MAT2 software. [82]

Passwords
  • Detection: Although a remote threat, thermal imaging can capture residual body heat on keys touched to input passwords up to one minute after the fact. [83] Also avoid places with CCTV or those which risk shoulder surfing.
  • Generation: Use random usernames and strong Diceware passwords for anonymous accounts; a minimal generation sketch follows this table. pwgen should only be used to generate usernames and not passphrases because its emphasis is on generating phonemes; see footnote. [84]
  • Retention: Consider the password-retention policy of the browser. If it supports a master password that encrypts every password it saves, then use that feature. It is generally safest not to save any blog or other passwords in the browser, but instead use a password manager and cut and paste passwords into the browser.
Pseudonym Isolation
For advanced separation of discrete activities, use Multiple Whonix-Workstation.
Publishing Time
Over time, pseudonymous activity can be profiled to provide an accurate estimate of the timezone, reducing the user's anonymity set. It is better to restrict posting activity to a fixed time that fits the daily activity pattern of people across many places.
Tor Browser Censorship
In most cases, Tor blocks by destination servers can be easily bypassed with simple proxies.
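
As referenced in the Passwords row above, here is a minimal Python sketch of Diceware-style passphrase generation with a cryptographically secure random source. It assumes a local copy of a Diceware wordlist at the placeholder path wordlist.txt (one word per line, optionally prefixed by its dice roll):

    import secrets

    # Load the wordlist; real Diceware lists contain 7776 words.
    with open("wordlist.txt") as f:
        words = [line.split()[-1] for line in f if line.strip()]

    # Six randomly chosen words are commonly recommended as a minimum.
    passphrase = " ".join(secrets.choice(words) for _ in range(6))
    print(passphrase)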

See Also[edit]

Footnotes[edit]

  1. Do You Hear What I Hear? Fingerprinting Smart Devices Through Embedded Acoustic Componentsarchive.org
  2. Mobile Device Identification via Sensor Fingerprintingarchive.org.
  3. For detailed information on this topic, see: Steganography and Digital Watermarkingarchive.org.
  4. https://www.daoudisamir.com/steganography-and-watermarking/archive.org
  5. https://en.wikipedia.org/wiki/Digital_watermarkingarchive.org
  6. Notably the watermark does not change the size of the carrier signal.
  7. Missing footnote.
  8. On the (In)effectiveness of Mosaicing and Blurring as Tools for Document Redactionarchive.org
  9. https://www.schneier.com/blog/archives/2023/06/redacting-documents-with-a-black-sharpie-doesnt-work.htmlarchive.org
  10. https://dde.binghamton.edu/download/camera_fingerprint/archive.org
  11. Fingerprintable Camera Anomaliesarchive.org
  12. While MAT2 does clean a wide range of files, the list of supported file formats is not exhaustive. Also, the author of MAT notes embedded media inside of complex formats might not be cleaned.
  13. https://www.slideshare.net/justestadipera/digital-image-forensics-camera-fingerprint-and-its-robustnessarchive.org
  14. https://www.futurity.org/smartphones-cameras-prnu-1634712-2/archive.org
  15. The error rate is less than 0.5%.
  16. https://www.google.com/patents/US20150124107archive.org
  17. Sensor Noise Camera Identification: Countering Counter-Forensicsarchive.org
  18. Anonymizing the PRNU noise pattern of pictures remains a promising area of research.
  19. https://link.springer.com/article/10.1007/s11042-019-08182-zarchive.org
  20. https://github.com/david-gpu/srez/blob/master/README.mdarchive.org
  21. Defeating Image Obfuscation with Deep Learningarchive.org
  22. https://lists.mayfirst.org/pipermail/guardian-dev/2016-September/004895.htmlarchive.org
  23. https://github.com/vmonaco/keystroke-obfuscationarchive.org
  24. https://en.wikipedia.org/wiki/Keystroke_dynamicsarchive.org
  25. https://github.com/vmonaco/kloakarchive.org
  26. https://en.wikipedia.org/wiki/Mouse_trackingarchive.org
  27. https://en.wikipedia.org/wiki/CSSarchive.org
  28. https://web.archive.org/web/20221022104548/http://jcarlosnorte.com/assets/fingerprint/archive.org
  29. https://web.archive.org/web/20221114004700/http://jcarlosnorte.com/security/2016/03/06/advanced-tor-browser-fingerprinting.htmlarchive.org
  30. This deanonymization technique is likely to succeed, since it is already used to lock persons out of secure accounts (pending identity verification) when their monitored behavior significantly deviates from behavior that has been learned.
  31. https://web.archive.org/web/20190510191716if_/https://twitter.com/davywtf/status/1124146339259002881archive.org
  32. https://github.com/vmonaco/kloak/issues/7#issuecomment-893817507archive.org
  33. 8-16ms should be enough for this purpose.
  34. https://web.archive.org/web/20160304062339/https://www.cs.drexel.edu/~sa499/papers/adversarial_stylometry.pdfarchive.org
  35. https://en.wikipedia.org/wiki/Stylometryarchive.org
  36. This will trick lesser adversaries, who cannot force the blog service provider to reveal exactly when and for how long a blog administrator logged in. This will not fool the blog service provider nor an adversary capable of recording all internet traffic.
  37. Do not use personal or identifying data as part of the account creation.
  38. As noted in the Mouse Fingerprinting section, mouse movements are another biometric profiling vector and disabling JavaScript is therefore recommended. High accuracy is achieved in limited situations, such as active authentication during log-on.
  39. This does not clear EU false positive requirements however, so they recommend it is combined with keystroke dynamics for extra confirmation, see: User re-authentication via mouse movementsarchive.org, On Using Mouse Movements as a Biometricarchive.org and http://www.cs.wm.edu/~hnw/paper/ccs11.pdfarchive.org
  40. For instance, stylometry works with less data (final text only) and in concert with keystroke fingerprinting is completely effective. An adversary can compare statistics about a user's typing over clearnet, then compare it to texts composed over Tor in real-time.
  41. For example, launch KWrite: Start menu buttonApplicationsUtilitiesText Editor (KWrite). Once KWrite is open, click on ToolsAutomatic spell checking. Misspelled words will be underlined with a red color.
  42. https://gitlab.torproject.org/legacy/trac/-/issues/19186archive.org
  43. User Behaviorarchive.org

    While somewhat outside the scope of browser fingerprinting, for completeness it is important to mention that users themselves theoretically might be fingerprinted through their behavior while interacting with a website. This behavior includes e.g. keystrokes, mouse movements, click speed, and writing style. Basic vectors such as keystroke and mouse usage fingerprinting can be mitigated by altering Javascript's notion of time. More advanced issues like writing style fingerprinting are the domain of other tools.

  44. This deanonymization technique works by playing a unique sound inaudible to human ears which is picked up by the microphones of untrusted devices. Watermarked audible sounds are equally dangerous, which means that hardware incapable of ultrasound is an ineffective protection.
  45. https://www.schneier.com/blog/archives/2015/11/ads_surreptitio.htmlarchive.org
  46. SPEAKE(a)R: Turn Speakers to Microphones for Fun and Profitarchive.org
  47. Acoustic Denial of Service Attacks on Hard Disk Drivesarchive.org
  48. Hidden Voice Commandsarchive.org, Cocaine Noodles: Exploiting the Gap between Human and Machine Speech Recognitionarchive.org
  49. Inaudible Voice Commandsarchive.org, DolphinAtack: Inaudible Voice Commandsarchive.org
  50. Rocking Drones with Intentional Sound Noise on Gyroscopic Sensorsarchive.org, WALNUT: Waging Doubt on Integrity of MEMS Accelerometers with Acoustic Injection Attacksarchive.org
  51. Gyrophone: Recognizing Speech from Gyroscope Signalsarchive.org
  52. Accelerometer-based smartphone eavesdroppingarchive.org, Spearphone: Motion Sensor-based Privacy Attack on Smartphonesarchive.org, Learning-based Practical Smartphone Eavesdropping with Built-in Accelerometerarchive.org
  53. https://arstechnica.com/information-technology/2018/08/researchers-find-way-to-spy-on-remote-screens-through-the-webcam-mic/archive.org
  54. https://www.newscientist.com/article/2110762-your-homes-online-gadgets-could-be-hacked-by-ultrasound/archive.org
  55. https://gitlab.torproject.org/legacy/trac/-/issues/20214archive.org
  56. Stealing Keys from PCs using a Radio: Cheap Electromagnetic Attacks on Windowed Exponentiationarchive.org: Extraction of secret decryption keys from laptop computers, by non-intrusively measuring electromagnetic emanations for a few seconds from a distance of 50 cm. The attack can be executed using cheap and readily-available equipment: a consumer-grade radio receiver or a Software Defined Radio USB dongle.
  57. Another attack involves measuring acoustic emanations: RSA Key Extraction via Low-Bandwidth Acoustic Cryptanalysisarchive.org.
  58. A poor man's implementation of TEMPEST attacks (recovering cryptographic keys by measuring electromagnetic emissions) using $3000 worth of equipment was proven possible from an adjacent room across a 15cm wall. These attacks were only possible for adversaries with nation-state resources for the past 50 years. See: CDH Key-Extraction via Low-Bandwidth Electromagnetic Attacks on PCsarchive.org
  59. https://invisiblefinger.click/assets/notrandompath/SP2021_EMI.pdfarchive.org
  60. https://www.schneier.com/blog/archives/2022/08/remotely-controlling-touchscreens-2.htmlarchive.org
  61. https://www.schneier.com/blog/archives/2023/06/power-led-side-channel-attack.htmlarchive.org
  62. https://web.archive.org/web/20170227052456/https://www.cio.com.au/article/602415/researchers-steal-data-from-pc-by-controllng-noise-from-fans/archive.org, Fansmitter: Acoustic Data Exfiltration from (Speakerless) Air-Gapped Computersarchive.org
  64. https://www.computerworld.com/article/3106862/sounds-from-your-hard-disk-drive-can-be-used-to-steal-a-pcs-data.htmlarchive.org, DiskFiltration: Data Exfiltration from Speakerless Air-Gapped Computers via Covert Hard Drive Noise (https://arxiv.org/ftp/arxiv/papers/1608/1608.03431.pdfarchive.org)
  64. https://www.computerworld.com/article/3173370/a-hard-drives-led-light-can-be-used-to-covertly-leak-data.htmlarchive.org, Leaking (a lot of) Data from Air-Gapped Computers via the (small) Hard Drive LEDarchive.org
  65. https://www.wired.com/story/air-gap-researcher-mordechai-guri/archive.org ODINI : Escaping Sensitive Data from Faraday-Caged, Air-Gapped Computers via Magnetic Fieldsarchive.org
  66. https://www.schneier.com/blog/archives/2017/04/jumping_airgaps.htmlarchive.org, Oops!...I think I scanned a malwarearchive.org
  67. Keystroke Recognition Using WiFi Signalsarchive.org
  68. https://web.archive.org/web/20221022104553/https://cse.msu.edu/~alikamr3/pdf/jsac2017_doublecolumn.pdfarchive.org
  69. https://www.securityweek.com/researchers-use-wifi-signals-read-keystrokesarchive.org
  70. In the paper: An attack variant using USRP (cellphone radio ranges) has performed poorly because of background energy interference due to microwave ovens, refrigerators, and televisions.
  71. https://www.quora.com/Why-cant-bots-check-%E2%80%9CI-am-not-a-robot%E2%80%9D-checkboxes/answer/Oliver-Emberton?share=1archive.org
  72. https://web.archive.org/web/20190806062009/http://scraping.pro/no-captcha-recaptcha-challenge/archive.org
  73. CAPTCHAS also directly enhance the strike capabilities of military drones, see: https://joeyh.name/blog/entry/prove_you_are_not_an_Evil_corporate_person/archive.org
  74. https://www.schneier.com/blog/archives/2022/09/leaking-screen-information-on-zoom-calls-through-reflections-in-eyeglasses.htmlarchive.org
  75. https://arxiv.org/pdf/1611.01599.pdfarchive.org
  76. https://www.schneier.com/blog/archives/2020/11/determining-what-video-conference-participants-are-typing-from-watching-shoulder-movements.htmlarchive.org
  77. ThermoSecure: Investigating the effectiveness of AI-driven thermal attacks on commonly used computer keyboards; https://www.schneier.com/blog/archives/2022/10/recovering-passwords-by-measuring-residual-heat.htmlarchive.org
  78. This is a variation of an older attack perfected during the Cold War where recorded typewriter sounds allowed discovery of what was typed. See: https://freedom-to-tinker.com/2005/09/09/acoustic-snooping-typed-information/archive.org and https://www.schneier.com/blog/archives/2016/10/eavesdropping_o_6.htmlarchive.org
  79. https://arxiv.org/pdf/2012.00687.pdfarchive.org
  80. https://arxiv.org/pdf/1903.11137.pdfarchive.org
  81. Such as unique camera IDs and often GPS coordinates in the case of photographs.
  82. https://www.schneier.com/blog/archives/2018/07/recovering_keyb.htmlarchive.org
  83. Such a bias means the program does what it is designed to do: produce pronounceable passwords rather than pure line noise. Even with the secure option -s it has been noted that it produces passwords with bias towards numbers and uppercase letters to make password checkers happy. The CVE to fix this was rejected and the behavior was not corrected by the authors. This is undesirable for creating true random output, see: pwgen: Multiple vulnerabilities in passwords generationarchive.org.

License[edit]

Gratitude is expressed to JonDosarchive.org for permissionarchive.org to use material from their website. The Surfing, Posting, Blogging page contains content from the JonDonym documentation Surfing and Bloggingarchive.org page.
