[OTR-dev] Fwd: Some DH groups found weak; is OTR vulnerable?

Peter Fairbrother zenadsl6186 at zen.co.uk
Fri May 29 08:51:13 EDT 2015


On 25/05/15 21:34, Gregory Maxwell wrote:
> On Fri, May 22, 2015 at 2:55 PM, Jacob Appelbaum <jacob at appelbaum.net> wrote:
[..]
>> Nevertheless, I've harbored the strong opinion for many months now that OTR should soon move to Curve25519 for key agreement,
>
> The same static group pre-computation arguments apply to using a
> static particular curve-- so saying Curve25519 isn't an answer to that
> particular class of concern.

Yes - monocultures are almost always more brittle than polycultures: when 
they fail, they tend to fail all the way. They also get more attacker 
resources thrown at them.

However, in cases where those factors can be allowed for - and I believe 
the use of 1536-bit prime DH in OTR is one of those cases - monocultures 
can be safer than giving people options.


> And-- not that I think group agility is a
> virtue-- its much more costly (and risky) to be group agile with EC--
> as picking suitable  groups is much harder (and techniques that
> cheaply yield groups with known high order are not currently in
> fashion) and there are tremendous performance improvements from
> specializing the software for a single group.

One slight concern I have is that the properties which make software 
optimisations possible in Curve25519 also make it easier to break 
Curve25519 than to break a randomly-chosen curve of similar order.

This is obviously true for simple brute-force attacks - faster field 
arithmetic speeds up generic attacks by the same constant factor - but I 
am concerned about more subtle weaknesses.

We know there are weaknesses and subtle unwanted structures in binary 
extension fields such as GF(2^255).

While I know of no developed attack, I wonder whether those weaknesses 
might be extended to prime fields whose order is close to that of such an 
extension field.


> OTR should be using ECC for bandwidth reasons, considering the narrow
> challenges OTR is using, not any other reason.


That is a reason for using ECC, and I agree that it is probably the only 
valid reason, as the need for extra processor resources would only affect 
a tiny minority of devices - basically, if a gadget otherwise capable of 
running OTR can do 256-bit EC DH, it can almost always do 1536-bit prime 
DH as well.


However, it is not a reason why ECC *must* be used.
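
For a rough sense of the bandwidth argument, some back-of-the-envelope 
figures (not OTR's actual wire encoding, and the re-keying count below is 
purely hypothetical): each 1536-bit prime DH public value is 192 bytes, 
against 32 bytes for a Curve25519 public key, and since OTR refreshes DH 
keys during normal messaging that difference recurs over a conversation.

    # Python 3 sketch: size of the public value exchanged, per key refresh.
    dh1536_pub = 1536 // 8    # 192 bytes for a 1536-bit prime DH value
    x25519_pub = 256 // 8     #  32 bytes for a Curve25519 public key
    refreshes  = 100          # hypothetical number of re-keyings in a chat
    print(dh1536_pub * refreshes, x25519_pub * refreshes)  # 19200 vs 3200 bytes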



Going slightly OT and speculative:

As to the Logjam paper, congratulations.

I wonder whether the "state-level threat" of breaking common 1024-bit DH 
primes is the "major breakthrough" which the NSA told Congress about a few 
years ago, and for which they got all that lovely extra money.

If so, the people who in 2013 were supporting the idea of replacing 
2048-bit RSA with ubiquitous 1024-bit DH in order to provide forward 
secrecy look a bit silly ..


[ the major browsers supported 1024-bit DH but 2048-bit RSA, perhaps 
because people mistakenly thought that DH keys only needed to be half the 
size of RSA keys - though it might be interesting to see where that rumour 
came from.

To quote Peter Gutmann:

"It's a debate between two groups, the security practitioners, "we'd 
like a PFS solution as soon as we can, and given currently-deployed 
infrastructure DH-1024 seems to be the best bet", and the theoreticians, 
"only a theoretically perfect solution is acceptable, even if it takes 
us forever to get it"." ]


.. as the only people who could partially break 2048-bit RSA were the 
major agencies ("gimme the private keys, sunshine, or go to jail"), the 
same ones who could almost universally break 1024-bit DH - but without the 
hassle of warrants, or of anyone else knowing about it ..


-- Peter Fairbrother


