
Re: [Sks-devel] RFE: max-*-size and strip-photo-uids


From: Jeffrey Johnson
Subject: Re: [Sks-devel] RFE: max-*-size and strip-photo-uids
Date: Sun, 27 May 2012 09:02:34 -0400

On May 27, 2012, at 7:45 AM, Robert J. Hansen wrote:

> On 5/27/12 7:10 AM, Robert J. Hansen wrote:
>> *These feature requests have clear, obvious downsides.*  (Not the least
>> of which is they won't work particularly well.)
> 
> So, the first question is -- what would be necessary for a solution to
> work well?
> 
> The brute force and overkill approach: sanity-check each imported
> certificate to ensure that the subkeys on the certificate are legitimate
> cryptographic keys.
> 
> Note: this is barking madness.  If I give you a block of bits and say
> this represents two numbers, the first being the product of two large
> primes and the second number coprime to the first -- e.g., the (n, e)
> tuple of an RSA public key -- the only way to prove it's a legitimate
> public key would be to factor the first number and ensure that pq=n with
> p and q prime, and that e is coprime to n.  If the only way to prove
> that a block of data is a correct RSA key is to break RSA, then we're
> absolutely screwed.
> 
> So much for the brute force and overkill approach.  We simply cannot
> check to ensure that an RSA public key is good.  This may leave the door
> open to checking whether an RSA public key is obviously *bad*.
> 
> To check for obviously bad keys, we could do trial divisions on all the
> primes up to, say, 10,000.  A naive encoding of binary data onto a
> purported RSA key would likely have a factor somewhere within that
> range.  Presto, we have a way to detect bad RSA keys.
> 
> Unfortunately, it's completely bogus.  Someone could simply do a naive
> encoding, then keep on adding 1 (well, 2, since they're smart enough to
> avoid even numbers) until they found a value that had no small factors.
> To recover the original number they just start subtracting 2 and repeat
> until such time as the CRC code in their data checks out.
> 
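
(A toy sketch of exactly that trick, in Python, with an illustrative CRC32
framing rather than anything a real smuggler would bother with: trial
division up to 10,000 flags a naive encoding, bumping the value by 2 until
no small factor remains defeats the check, and stepping back down recovers
the payload.)

    import zlib

    def small_primes(limit=10_000):
        """Primes up to `limit` via a simple sieve."""
        sieve = bytearray([1]) * (limit + 1)
        sieve[0:2] = b"\x00\x00"
        for p in range(2, int(limit ** 0.5) + 1):
            if sieve[p]:
                sieve[p * p::p] = bytearray(len(sieve[p * p::p]))
        return [i for i, flag in enumerate(sieve) if flag]

    PRIMES = small_primes()

    def has_small_factor(n):
        """The proposed server-side check: trial division by primes below 10,000."""
        return any(n % p == 0 for p in PRIMES)

    def encode(payload: bytes) -> int:
        """Pack payload || CRC32 into an odd integer, then bump it past small factors."""
        blob = payload + zlib.crc32(payload).to_bytes(4, "big")
        n = (int.from_bytes(blob, "big") << 1) | 1   # shift keeps payload intact, forces oddness
        while has_small_factor(n):                   # defeat the trial-division check
            n += 2
        return n

    def decode(n: int) -> bytes:
        """Step back down by 2 until the CRC32 trailer checks out."""
        while True:
            m = n >> 1
            blob = m.to_bytes((m.bit_length() + 7) // 8, "big")
            if len(blob) > 4 and zlib.crc32(blob[:-4]).to_bytes(4, "big") == blob[-4:]:
                return blob[:-4]
            n -= 2

    n = encode(b"not a public key at all")
    print(has_small_factor(n))   # False: sails past the check
    print(decode(n))             # b'not a public key at all'
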
> There may be other such ways to check for bad keys, but I suspect
> they're all going to face the same problem.  I don't think there's a
> cryptologic solution to this.  It may be more worthwhile for us to look
> at the data forensics community, to see if they have any tools that can
> quickly and efficiently find media files that may be embedded inside
> other files.  (This was part of the DFRWS 2006 challenge, incidentally,
> so I know the forensics community has looked into the problem.)
> 
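
(A crude version of that kind of carving scan: just look for well-known file
signatures inside an arbitrary blob. The signature table here is a small
illustrative subset, nowhere near what real forensics tooling matches.)

    # Naive carve: report offsets of well-known media signatures inside a blob.
    MAGIC = {
        b"\xff\xd8\xff": "JPEG",
        b"\x89PNG\r\n\x1a\n": "PNG",
        b"GIF87a": "GIF",
        b"GIF89a": "GIF",
        b"RIFF": "RIFF container (WAV/AVI/WebP)",
    }

    def find_embedded_media(blob: bytes):
        hits = []
        for sig, name in MAGIC.items():
            start = 0
            while (pos := blob.find(sig, start)) != -1:
                hits.append((pos, name))
                start = pos + 1
        return sorted(hits)

    # A JPEG header buried inside otherwise opaque "key material":
    data = b"\x00" * 100 + b"\xff\xd8\xff\xe0" + b"\x00" * 50
    print(find_embedded_media(data))   # [(100, 'JPEG')]
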
> Anyway.  Thoughts?  Ideas?
> 

Only a de facto definition of "bad" might be enforceable in SKS key servers:

        A public key that fails to verify its own binding signatures is most
        likely bad.

This de facto definition could be enforced when a pubkey is added, by
rejecting public keys that fail self-certification in their binding
signatures.
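
A rough sketch of what that gate could look like at submission time, leaning
on GnuPG to do the actual OpenPGP signature checking in a throwaway keyring
(the wrapper function and the accept/reject policy are illustrative, not an
SKS patch):

    import subprocess
    import tempfile

    def passes_self_certification(key_block: bytes) -> bool:
        """Accept only keys GnuPG will import, i.e. ones carrying a usable self-signature."""
        with tempfile.TemporaryDirectory() as homedir:
            proc = subprocess.run(
                ["gpg", "--homedir", homedir, "--batch", "--no-tty",
                 "--status-fd", "1", "--import"],
                input=key_block,
                stdout=subprocess.PIPE,
                stderr=subprocess.DEVNULL,
            )
            # GnuPG emits "[GNUPG:] IMPORT_OK ..." only for keys it accepted;
            # keys with no usable self-signed user ID are skipped instead.
            return b"[GNUPG:] IMPORT_OK" in proc.stdout

    # Hypothetical use at submission time:
    #     if not passes_self_certification(submitted_block):
    #         reject(submitted_block)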

OTOH, there are damaged pre-existing public keys already in SKS key servers;
these need to be preserved to document the damage historically.

I see no reason why, say, RSA-64 (RSA with 64-bit moduli) needs to be
prevented or blocked, even though factorization can be done quickly. I also
see no reason to attempt looking for small prime factors: it's not the job
of SKS key servers to ensure the quality of algorithmic parameters (and
trial division up to 10,000 doesn't begin to help with that QA).
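
For scale, a 64-bit modulus falls to textbook Pollard rho in a fraction of a
second; a toy sketch (not something SKS would or should ever run):

    import math
    import random

    def pollard_rho(n: int) -> int:
        """Return a non-trivial factor of a composite n (Floyd cycle finding)."""
        if n % 2 == 0:
            return 2
        while True:
            x = random.randrange(2, n)
            y, c, d = x, random.randrange(1, n), 1
            while d == 1:
                x = (x * x + c) % n
                y = (y * y + c) % n
                y = (y * y + c) % n
                d = math.gcd(abs(x - y), n)
            if d != n:
                return d

    # A 64-bit "RSA" modulus: the product of the primes straddling 2**32.
    p, q = 4294967291, 4294967311          # 2**32 - 5 and 2**32 + 15
    n = p * q
    f = pollard_rho(n)
    print(f, n // f)                       # recovers p and q almost instantly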

73 de Jeff



