[massanity] Initial thoughts from Nathaniel

Patrik Fältström paf at cisco.com
Sun Nov 14 11:54:01 CET 2004


> I'm afraid you lose me at step a):
>
> On Nov 11, 2004, at 8:03 PM, massanity-request at lists.paf.se wrote:
>
>> (a) I don't believe we will be able to have two ways of signing
>> message bodies in the long run. Either we have multipart/signed, or
>> we sign the bucket of bits in the message (and ignore the MIME). We
>> will never be able to have both.
>
> This is not a belief that I share.  There are *lots* of things we
> have two ways of doing; why predict that this won't be another one?

Because they are technically very hard to implement at the same time.
Sure, if we had two ways of doing the same thing that could coexist,
I would not be so nervous, but I don't think that is possible for
these two. Just think of the transformations multipart/signed can
apply to the body of a message, or the transformations 8BITMIME can
cause.
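
To make this concrete, here is a small Python illustration (my own
sketch, not from any draft) of why a naive hash over the raw body
bytes cannot survive a content-transfer-encoding change such as an
8BITMIME downgrade:

    import hashlib
    import quopri

    # An 8-bit body as it might be submitted over 8BITMIME.
    body_8bit = "Hälsningar från Patrik\n".encode("utf-8")

    # A relay that cannot pass 8-bit data downgrades the part to
    # quoted-printable: the logical text is unchanged, the bytes
    # are not.
    body_qp = quopri.encodestring(body_8bit)

    print(hashlib.sha1(body_8bit).hexdigest())
    print(hashlib.sha1(body_qp).hexdigest())  # a different digest

A signature computed over the raw bytes breaks, even though no
"content" changed at all.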

That is, all of our thinking in SMTP land since 1995 has been about
protecting the MIME structure and making sure it is not damaged. Now,
with the MASS initiatives, we are once again back to "don't change
the bytes in the body".

But this is exactly what I want to discuss with those of you who know
SMTP at least as well as I do (in many cases much, much better).

> More important, I don't think *either* of these two is the way most
> of us have been looking at doing MASS signatures.  I think we're
> working on a third model here, one I would characterize for lay
> audiences as a "low-resolution signature" (by analogy to low-res
> graphics).  Think of it not as a cryptographically signed message,
> but a cryptographically signed *checksum* of the message, using a
> checksum algorithm that is invariant across the kinds of whitespace
> shifting and line wrapping that characterize email transport.

Correct, there is a difference between the canonicalization of the
message body itself and what we pass to the hash function that
calculates the signature. But is it correct, for example, to make
this function so low-res that it does not raise an alarm when certain
changes happen? I have seen ideas that, for example, strip the 8th
bit of every byte in the body, ignore what the hash function "thinks"
is a boundary between the parts of a multipart message (as the last
boundary has '--' appended), or use byte counts to say how many bytes
of the message are to be signed. And so on.
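
As a sketch of what such a "low-res" input function might look like
(the specific steps here are my own illustration, not any concrete
proposal), in Python:

    import hashlib

    def lowres_digest(body: bytes, nbytes: int) -> str:
        """Hypothetical low-res canonicalization of the kind
        described above: strip the 8th bit of every byte, collapse
        all runs of whitespace, and hash only the first nbytes
        bytes of the result."""
        stripped = bytes(b & 0x7F for b in body)   # drop the 8th bit
        collapsed = b" ".join(stripped.split())    # fold whitespace
        return hashlib.sha1(collapsed[:nbytes]).hexdigest()

Every such step makes the digest survive more in-flight damage, and
at the same time makes it silently accept more real modifications.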

The problem is that the checksum algorithm will end up that low-res
and still not work (because of multipart wrappers added around
existing messages, and changes of content-transfer-encoding in
flight), so it will not fly.

For example, I see too many issues of this kind with the Cisco
proposal, IIM. Every day there is a new "fun" problem.

>> (b) If we sign the bucket of bits, we destroy the ability to use
>> 8BIT content-transfer-encoding and the 8BITMIME ESMTP extension
>> (which leads to re-encoding of messages in flight in some cases).
>
> This is the sort of issue we're still grappling with.  My current
> theory is that the "checksum" should be computed on a canonicalized
> version of the message that undoes all transport encodings and
> perhaps even ignores the purely syntactic elements of the MIME
> structure (e.g. the boundary line).
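
If I read that correctly, the theory in code would be roughly the
following (my own Python sketch, not anything Nathaniel has
specified): walk the MIME tree, decode each leaf body, and hash only
the decoded bytes, so that neither the content-transfer-encoding nor
the boundary lines affect the digest.

    import email
    import hashlib

    def canonical_digest(raw: bytes) -> str:
        """Hash only the decoded leaf bodies: changes of
        content-transfer-encoding and of the boundary strings no
        longer change the digest."""
        msg = email.message_from_bytes(raw)
        h = hashlib.sha1()
        for part in msg.walk():
            if not part.is_multipart():
                h.update(part.get_payload(decode=True) or b"")
        return h.hexdigest()

But a walk like this assumes the tree on the receiving side is the
tree that was signed.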

So how are we supposed to handle the case where MTAs already add
multipart/signed wrappers today, i.e. put a multipart/signed wrapper
around the message as we have it? Are we 100% sure the unwrapping
will leave exactly the same bytes (and the same number of bytes) at
the other end of the "tunnel"?
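
A tiny Python sketch (entirely my own illustration, with a dummy
signature part) shows how easily that round trip loses byte identity:

    import email
    from email.mime.application import MIMEApplication
    from email.mime.multipart import MIMEMultipart

    original = (b"Content-Type: text/plain\r\n"
                b"\r\n"
                b"The body we signed.\r\n")

    # A gateway wraps the message in multipart/signed (RFC 1847).
    inner = email.message_from_bytes(original)
    wrapped = MIMEMultipart("signed",
                            protocol="application/pgp-signature")
    wrapped.attach(inner)
    wrapped.attach(MIMEApplication(b"...sig...", "pgp-signature"))

    # The receiver unwraps and compares the bytes.
    unwrapped = email.message_from_bytes(
        wrapped.as_bytes()).get_payload(0)
    print(unwrapped.as_bytes() == original)  # often False: CRLF
                                             # versus LF, refolded
                                             # headers, and so on

Even this toy round trip, with no malice anywhere, has trouble
keeping the bytes identical.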

I am nervous that we can optimize either for the old, broken systems
that cannot handle MIME or for the new ones that do the right thing,
but not for both, and that we will choose to optimize for the systems
that are old and broken.

> Does this help at all? -- Nathaniel

Yes, this is exactly the right direction for this discussion.

    paf



