[OTR-dev] mpOTR project

Ximin Luo infinity0 at gmx.com
Tue Dec 17 20:44:24 EST 2013


On 17/12/13 17:49, Dennis Gamayunov wrote:
> On 17.12.2013 21:06, Ximin Luo wrote:
>> On 17/12/13 16:33, Dennis Gamayunov wrote:
>>> Hi, Ximin,
>>>
>>> 17.12.2013 19:41, Ximin Luo wrote:
>>>> Haven't had time to read through the wiki yet, but just wondering,
>>>> what are your ideas on deniability? Some of us want to drop this
>>>> property because it's really not that strong[1], and requiring it
>>>> makes other parts of the protocol harder / more complex. Because of
>>>> this, we also intend to drop the name "mpOTR", on the basis that
>>>> deniability and "off-the-record" can be misleading for a non-technical
>>>> user. X [1] see the otr-users thread, "The effectiveness of
>>>> deniability", starting Nov 29, last message December 06.
>>> I was not on that list, but similar subdiscussions occurred here on
>>> otr-dev as well.
>>>
>>> For me personally, deniability is mpOTR's primary difference from
>>> various older secure chats (e.g. SILC, PGP-backed XMPP). If the
>>> resulting protocol lacks this feature, I would expect new projects to
>>> come fill this niche. So, why don't we try to address the need for
>>> deniability right now, while protocol development is still in progress?
>>>
>> As we've been talking about, "mpOTR" would provide forward secrecy (not the case for PGP) and be end-to-end decentralised (not the case for SILC).
>>
>> What's your user scenario where you find OTR-deniability to be useful? You understand that if your partner (who is logging your ciphertext) is colluding with the NSA (who is logging everyone), there is no "deniability" that you could argue in court? (IANAL)
> Which raises the question - what's our adversary model for deniability?
> What are the most (potentially) frequent usecases for "we can reject a
> strong claim"? There is a wide variety of possible situations from "we
> need to pin this guy, let's look what he has on his laptop", which may
> be untargeted and accidental, to "we need to force this guy to say
> something so that we could use his words as a proof later", which is
> always targeted and hard to address with any kind of communications. If
> we recall the basic idea of OTR communications, the protocol aims at
> simulating the real-world scenario of off-the-record offline meetings,
> and in this case a global surveillance agent would be out of scope, wouldn't it?
> 

To formalise the problem (I hope I got this right):

1. All participants are honest; attacker M can perform chosen-ciphertext attacks (CCA) on the network.
2. Some participants A are honest, but some B are colluding with attacker M to
convince themselves of A's record.
3. Some participants A are honest, some B are colluding with attacker M to
extract a proof P of A's record, to prove it to *an honest independent party*
J, who understands that B/M may not be honest. (If J assumes B/M are honest,
this reduces to (2).)

(1) is a solved problem, (2) is impossible to defend against, (3) is our
deniability scenario. In the simple scenario Greg described above, cheatproof
logging devices don't exist. There, weak deniability is suitable: there are no
strong proofs, and weak proofs might all be forged, so they are useless to J.
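Concretely, OTR-style weak deniability comes from authenticating messages with
a MAC key that both parties share, so a tagged transcript proves nothing to J:
either side could have produced every tag. A minimal sketch of that symmetry
(the key and messages here are made up for illustration, not taken from any
actual protocol run):

```python
import hmac
import hashlib

# In an OTR-style session, A and B derive the *same* MAC key.
shared_mac_key = b"session-mac-key-derived-from-DH"

def tag(message: bytes) -> bytes:
    """Authentication tag that either party can compute."""
    return hmac.new(shared_mac_key, message, hashlib.sha256).digest()

# A genuinely sends this message...
genuine = b"A: meet at noon"
genuine_tag = tag(genuine)

# ...but B (or M, once given the key) can forge an equally
# valid-looking entry that A never sent.
forged = b"A: I admit everything"
forged_tag = tag(forged)

# A third party J cannot distinguish the two cases:
# both tags verify under the same shared key.
assert hmac.compare_digest(tag(genuine), genuine_tag)
assert hmac.compare_digest(tag(forged), forged_tag)
```

This is exactly why a MAC'd transcript is only a weak proof: verification
requires the shared key, and anyone holding the key could have forged the
whole thing.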

However, in a more complex scenario that is IMO closer to the real world, even
though J understands B/M may not be honest, they might assume partial honesty -
i.e. that it is costly to forge evidence, and that the sheer amount of evidence
can count for "proof" even though it technically could be forged. Here, we want
strong-deniability.

This is why Big Brother can't just claim that all of Larry Leaker's documents
are made up - the amount of information revealed makes this claim implausible.
It's also why anonymity is hard, because all you need there is balance of
probabilities.

Even in real-world meetings, someone can record your voice. They could forge
the recording, but a high-enough quality recording would be analogous to a
cryptographic signature - the amount of effort to produce it would be thought
of as "infeasible", and the balance of probabilities swings in favour of
the attacker.

So, to ensure strong deniability, to be useful in the real world, we must take
into account the *cost* of forging weak proofs. Even though they are weak in a
cryptographic sense, the third party J might still count them because they are
costly to forge.

>>
>> OTR-deniability is "we can reject a strong claim (a proof)". Intuitive deniability would be "we can strongly reject a claim", including metadata (hard!). Another alternative is "we can strongly claim (prove) the rejection", which is probably impossible, but I mention it for completeness, because some people get confused between the forms.
> "We can strongly claim the rejection" seems impossible, indeed. But is
> this really so, can we prove that somehow?

I tried this:

1. "I prove a rejection of P" formally means "I can provide a string S, which
others can use as input to some verification algorithm V, to conclude that P
is false, even though P is actually true".
2. But any V with this property would not be accepted as a valid
verification algorithm by anyone.

I can't see anything wrong with that line of reasoning, but it's not entirely
mathematical - "would not be accepted" is technically hand-waving.
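One way to state that argument slightly more formally (my own notation, not a
standard definition):

```latex
\text{Soundness of a verifier } V:\quad
  \forall S.\; V(S, Q) = \mathrm{accept} \implies Q \text{ is true.}

\text{A ``proof of rejection'' of a true } P \text{ is some } S \text{ with }
  V(S, \lnot P) = \mathrm{accept} \text{ while } P \text{ holds,}

\text{i.e. } V \text{ accepts the false statement } \lnot P,
\text{ which contradicts soundness.}
```

So any V that admits such proofs is unsound by definition, which is the
precise sense in which "would not be accepted by anyone" can be cashed out.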

>>> It's true that there might be very little practical difference between
>>> "digitally signed proof" and "proof with a broken digital signature" in
>>> some of the real-world scenarios. But maybe we could address these
>>> scenarios in some other way? For example, during the shutdown phase we
>>> are free to change the session transcript in an arbitrary way, and
>>> communicate it between participants. These manipulations could add bogus
>>> messages into the transcript, or rework it so that user profiles in
>>> terms of language use would be identical, or enforce some other
>>> characteristics of transcript.
>>>
>>> Maybe it would be possible to keep the transcript skeleton intact, to
>>> make it accessible and searchable for the user off-line, but modify it in
>>> a way that would make it impossible for a 3rd party to use it as a proof
>>> of anything.
>>>
>>> Dennis
>> You can't guarantee that others will follow protocol and edit the transcript in that way, similar to how you can't guarantee that your partner isn't logging the plaintext.
> 
> It seems you can guarantee some of these if the conversation is split
> into a series of "sessions" in terms of the protocol, with full session
> shutdown, transcript rewrite and rekeying based on negotiated rewritten
> transcript. The typical conversation lasts for minutes, tens of minutes
> and longer. Can we make very short communication phase which would last
> for seconds, and make the shutdown and rewritten transcript negotiation
> transparent for benign participants?
> 

The problem is that, whatever you try to "rewrite", the attacker can take a
copy of the original without your knowledge, then follow the rest of the
protocol and present the rewritten version to you. It's impossible to know
whether they took a copy or not.

(This is why crypto-ACL schemes like Tahoe-LAFS provide immutable access
semantics - to "revoke read access" to something actually means to "revoke
read access to all future versions", since you can't guarantee that the
attacker didn't take a copy while they had read access.)
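The indistinguishability is easy to see concretely: a peer that follows the
rewrite protocol and a peer that silently keeps a copy first send exactly the
same bytes on the wire. A toy sketch (the "rewrite" step here is invented for
illustration, not the actual proposed protocol):

```python
# Two peers in a hypothetical "rewrite the transcript" protocol.
# Both behave identically on the wire; only internal state differs,
# so the honest party A has no way to tell them apart.

class CompliantPeer:
    def rewrite(self, transcript: list) -> list:
        # Follows protocol: returns the rewritten transcript.
        return [m.replace("secret", "[redacted]") for m in transcript]

class LoggingPeer(CompliantPeer):
    def __init__(self):
        self.kept_copy = []

    def rewrite(self, transcript: list) -> list:
        # Silently keeps the original, then answers identically.
        self.kept_copy = list(transcript)
        return super().rewrite(transcript)

original = ["A: the secret is 42"]
honest, logger = CompliantPeer(), LoggingPeer()

# A observes identical protocol responses from both peers...
assert honest.rewrite(original) == logger.rewrite(original)
# ...yet the logging peer still holds the unrewritten original.
assert logger.kept_copy == original
```

No observable behaviour distinguishes the two, which is why the rewrite step
can't provide any guarantee against a logging participant.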

Or, did I misunderstand you, and by "change the transcript" you meant something
else?

X

-- 
GPG: 4096R/1318EFAC5FBBDBCE
git://github.com/infinity0/pubkeys.git
