[Whonix-devel] Argon2id Security Margin Calculation
solar at openwall.com
Sat Sep 22 20:36:20 CEST 2018
On Sat, Sep 22, 2018 at 03:29:00PM +0000, procmem wrote:
> Best practice says to use a diceware passphrase that has equivalent
> entropy to the master-key. Since Grover's cuts down passphrase strength
> by half, a 256-bit diceware phrase is needed for future-proofing, but
> that becomes unwieldy as 20 words are necessary. The only realistic
> option is to use keystretching to make passphrase length manageable.
To me, best practice is to use a reasonably affordable combination and
balance of key stretching (your job) and passphrase complexity (users'
responsibility, but you need to provide them with some guidance).
"Reasonably affordable" key stretching for that use case is something
that runs for e.g. 1 second. More than that is rarely a good idea for
everyday usage - you could type in an extra word instead of having to
wait for more than 1 second. Also, it probably means using something
like 1 GB of RAM, so that it works on current typical computers and VMs.
Then you see how many Diceware words or whatever you still need in that
passphrase in order to achieve a certain security level.
Regarding post-quantum, I'm no expert but realistically I'd just assume
that Argon2id will remain too complicated to implement and correctly
compute on a quantum computer in the foreseeable future, if ever. So
it's not primarily a matter of how much stretching it provides against
quantum attacks - what matters is whether it can be run on such hardware
at all. The stretching per se is thus against non-quantum computers. It goes
either as the number of operations (like Steve tried to calculate)
relative to a certain fast cryptographic hash, or as the product of that
and the memory usage (or as square of the number of operations if you
set t=1 to maximize the memory usage per time elapsed). Which of these
metrics (or something in between) is relevant depends on the kind of attack
hardware - whether it has the required memory anyway (a typical
computer) or the memory would cost extra (ASIC) or the memory is
available anyway but its size and latency are limiting what the attacker
can do (GPU).
> I was looking for advice on how to accurately calculate the security
> margin of argon2id against nation-state adversaries with a lot of
> computing power (of every type). The hashing implementation is the one
> included in Debian (as of Buster) LUKS2 with AES-256 XTS.
I think "security margin" means a different thing than what you ask
about in the message, so I suggest you avoid this wording.
> I've been trying to find an answer to this question by reading through
> the literature on argon2 with no success. Many people say it's hard, so a
> non-cryptographer like me stands no chance of understanding this. I asked
> JP Aumasson and he recommended I talk to you. Steve Thomas gave me the
> estimate quoted below but he also advised me to ask you.
> Can you please share an equation and show me how to plug in the numbers
> to calculate the entropy added when any of the parameters are changed?
> "2^27 < entropy < 2^35" for Argon2id m=1GiB, t=50, p=4.
I don't know what baseline hash Steve's estimate is against. Maybe the
uncertainty of this estimate is in part because the baseline is not
clearly defined - just something fast, which could be e.g. a round of
BLAKE2 or the whole BLAKE2.
More importantly, I doubt most of your userbase would be able to
consider such nuance without prior knowledge, nor do they need that.
So, no, you do not need "to accurately calculate" this. You could have
some accurate formula and figure, but without context and understanding
its nuances it'd be meaningless.
Anyway, taking your parameters m=1GiB, t=50 we have this for the number
of 1 KiB block operations:
20+log2(50) = ~26 bits
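The arithmetic above is easy to check directly (a sketch: 1 GiB is 2^20 blocks of 1 KiB, and t=50 passes touch each block 50 times):

```python
import math

m_blocks = 2**20           # 1 GiB of memory = 2^20 blocks of 1 KiB each
t = 50                     # number of passes over the memory
block_ops = m_blocks * t   # total 1 KiB block computations
print(math.log2(block_ops))  # ~25.6, i.e. roughly 26 bits
```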
Each block consists of 16 BLAKE2b round computations (2 sequential sets
of 8), which is on par with the 12 that BLAKE2b normally uses. We could
also consider the change from BLAKE2b to BlaMka, which will possibly
give Steve's minimum of 27 or more, but that's not obviously right -
this way of calculation (of the number of operations only) is most
relevant for typical computers, and on them BlaMka is about as fast as a
normal BLAKE2b round.
Now, going for the product of number of operations and memory usage
(relevant on attack hardware where memory is costly, which fits your
mentioned threat model with nation-state attackers potentially with
ASICs) we'd have:
20+20-log2(8*4)+log2(50) = ~41 bits
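The time-memory product estimate can be checked the same way (a sketch; the baseline is one fast hash computation and 1 KiB of memory, and the 8-way in-block plus 4-way thread parallelism is subtracted as explained below):

```python
import math

ops_bits = 20 + math.log2(50)   # block operations, as above (~25.6 bits)
mem_bits = 20                   # 2^20 KiB of memory vs. a 1 KiB baseline
par_bits = math.log2(8 * 4)     # 8x in-block times 4 threads (p=4) = 5 bits
print(ops_bits + mem_bits - par_bits)  # ~40.6, i.e. roughly 41 bits
```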
Again, this is against a rather arbitrary baseline now also of 1 KiB of
memory - but it's a realistic amount of memory that a fast cryptographic
hash not intended for key stretching could have used.
I subtract the parallelism (8x within a block and 4x thread-level p=4),
because for this sort of attack the parallelism reduces the duration
for which the memory has to be occupied. On the other hand, this is
where the presumed additional slowness of BlaMka compared to a regular
BLAKE2b round on specialized hardware could finally matter. That could
give 3 or so bits more than the above estimate relative to BLAKE2b.
Then, I think t=50 is excessive. It isn't even the equivalent of a one
word longer passphrase, but it takes more time than typing a word would.
Perhaps you should consider t=3. That will be only "4 bits" less,
giving you something like 22 to 40 bits (depending on attack hardware)
of stretching relative to BLAKE2b.
> *I saw somewhere that increasing CPU cost lessens the effectiveness of
> memory cost and vice versa, is this how it works?
Not exactly. It doesn't "lessen the effectiveness of memory cost", but
it may limit what memory cost you'd use to fit in your time budget.
This usually applies in password hashing scenarios, and not in your KDF
scenario where you have ample time (1 second or more). Also, no, the two
costs are not interchangeable: a higher memory cost is more valuable to
have than a higher time cost parameter, because higher memory cost also
implies more time spent.
Summary: you can use something like m=1 GiB, t=3, p=4. You can estimate
that this gives you stretching equivalent to 22 to 40 bits of Shannon
entropy against conventional computers (and especially against custom
hardware, which would otherwise have a greater advantage), and hope that
the very use of a complicated, long-running algorithm needing this much
memory breaks implementation/computation on quantum computers.
So m=1 GiB, t=3, p=4 combined with 7 Diceware words will give you
something like ~112 to ~128 bits, where the latter figure would actually
matter against attackers who could not-too-unrealistically pull this
kind of key search off. That's probably the maximum number of Diceware
words you'd recommend your users to use in combination with those
settings. In practice, I'd be surprised if 4 words were not enough as
that's ~90 bits against this sort of attackers, and why would they care
to invest more resources in an attack? (It's maybe ~74 bits against
regular computer attackers, but those attackers are relatively weak.)
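The per-word figures are easy to reproduce (a sketch; each Diceware word is drawn from a list of 7776 = 6^5 words, so it contributes log2(7776), about 12.9 bits, and ~22 bits of stretching is added for the regular-computer metric - against memory-costly hardware one would add ~40 bits instead):

```python
import math

word_bits = math.log2(7776)      # ~12.92 bits per Diceware word (6^5 words)
four = 4 * word_bits + 22        # 4 words plus ~22 bits of stretching
seven = 7 * word_bits + 22       # 7 words plus ~22 bits of stretching
print(round(four), round(seven))  # ~74 and ~112, matching the figures above
```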
Of course, your users should feel free to add some reasonable security
margin on top of this, through adding a few more Diceware words. Here,
I'm using the words to mean what I think they mean: an extra on top of
what's known to be required to achieve the desired security level.
Also, please note and make your users aware that the use of Shannon
entropy to measure password strength applies only when the distribution
is uniform, which in practice means that it applies only to a subset of
cases of generated (including e.g. with dice) passwords/phrases (the
subset where the distribution is uniform, which it should be). For
almost all purposes, it isn't a valid metric to use for human-chosen
passwords/phrases, where the distribution is highly non-uniform.
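For generated passphrases, the uniformity comes from the generation process itself. A minimal sketch using Python's secrets module (the word list here is a placeholder; any uniformly sampled 7776-entry Diceware list behaves the same):

```python
import math
import secrets

# Placeholder 7776-entry word list; a real Diceware list would go here.
wordlist = [f"word{i:04d}" for i in range(6**5)]

# secrets.choice() draws uniformly, like rolling five fair dice per word,
# so each word contributes log2(7776) ~ 12.9 bits of Shannon entropy.
passphrase = " ".join(secrets.choice(wordlist) for _ in range(7))
print(passphrase, "->", round(7 * math.log2(len(wordlist)), 1), "bits")
```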
I hope this helps.