
Re: [Sks-devel] IPv6 peering; keydumps annoyingly large


From: Scott Grayban
Subject: Re: [Sks-devel] IPv6 peering; keydumps annoyingly large
Date: Wed, 01 Jun 2011 14:33:14 -0700
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-GB; rv:1.8.1.23) Gecko/20090812 Lightning/0.9.4-Inverse Thunderbird/2.0.0.23 Mnenhy/0.7.5.0

I agree with Xian.

The current key-saving method is bulky. To give you an idea of the growth:
last year the dump was 3 GB; as of today it is over 4 GB, so at least a
1 GB increase in one year. If that is a sign of usage, then after 5 to 10
years of growth the DB will become unmanageable -- in particular, one step
for getting into the pool (importing a current dump of the DB) is not
going to work that well.
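As a rough sanity check of that projection, here is a minimal sketch that linearly extrapolates the ~1 GB/year growth observed above (the rate and starting size are taken from this message; real growth need not stay linear):

```python
# Rough linear extrapolation of the SKS dump size, assuming the
# ~1 GB/year growth observed between 2010 (3 GB) and 2011 (4 GB) holds.
def projected_dump_size_gb(years_ahead, current_gb=4.0, growth_gb_per_year=1.0):
    """Project the dump size years_ahead from now, assuming linear growth."""
    return current_gb + growth_gb_per_year * years_ahead

print(projected_dump_size_gb(5))   # 9.0 GB in 5 years
print(projected_dump_size_gb(10))  # 14.0 GB in 10 years
```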

We need to have a function in place to remove dead keys -- especially
the expired ones. If revoked keys need to stay, we should limit their
time in the DB.
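A hypothetical sketch of such a pruning function, as proposed above: drop expired keys, and keep revoked keys only for a limited grace period so the revocation can still propagate. The record fields and the grace window are illustrative assumptions, not SKS's actual on-disk format or behaviour:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Key:
    keyid: str
    expires_at: Optional[int]   # Unix timestamp; None = never expires
    revoked_at: Optional[int]   # Unix timestamp; None = not revoked

def prune_dead_keys(keys, now, revoked_grace=365 * 86400):
    """Keep unexpired keys; keep revoked keys only within a grace window."""
    kept = []
    for k in keys:
        if k.revoked_at is not None:
            # Limit a revoked key's time in the DB, per the proposal above.
            if now - k.revoked_at < revoked_grace:
                kept.append(k)
        elif k.expires_at is None or k.expires_at > now:
            kept.append(k)
    return kept
```

As the thread's replies note, any such deletion would have to interact carefully with the network's append-only reconciliation model.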

Regards,
Scott Grayban

 /"\
 \ /     ASCII RIBBON
  X        FIGHT BREAST CANCER
 / \


Xian Stannard said the following on 06/01/2011 10:14 AM:
> OK. I think I missed some design decisions here. I'm asking questions
> here not because I particularly think we should go down these routes,
> but because I'm interested in the why vs. why not.
>
> On 01/06/2011 14:39, Robert J. Hansen wrote:
> > you've just added to the keyserver network a way to delete keys and
> > keep them from getting re-entered into the DB.
> > This is exactly what the keyserver network is meant to avoid.
> I can see that it is bad to lose keys that are in use, but why must
> every key from day zero be kept? The deletion need not be prohibitive of
> the key being uploaded again: that could trigger it to be re-propagated.
>
> On 01/06/2011 15:47, John Clizbe wrote:
> > The idea of subsetting keys to different servers completely breaks
> > what makes SKS so great - the FAST reconciliation of differences
> > between two sets of data (servers).
> If the complete set were split into clearly defined subsets, couldn't
> the fast set reconciliation occur between these subsets just as quickly?
> Servers could carry multiple subsets to make sure that no particular
> subset lacked redundancy. Could the current servers be thought of as
> holding all subsets?
>
> I'm guessing that one of the design aims is that the network of servers
> needs to be redundant enough so that it is very hard to kill enough of
> them to start losing access to keys.
>
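The subset idea quoted above can be sketched as a toy: partition keys into fixed subsets by a hash prefix, then reconcile each subset independently. This uses plain set symmetric difference purely for illustration; SKS's actual recon protocol is an efficient polynomial-based set reconciliation, not this:

```python
import hashlib

NUM_SUBSETS = 4  # illustrative; a real scheme would pick this carefully

def subset_of(keyid: str) -> int:
    """Assign a key ID to a subset by the first byte of its SHA-1 hash."""
    return hashlib.sha1(keyid.encode()).digest()[0] % NUM_SUBSETS

def partition(keyids):
    buckets = [set() for _ in range(NUM_SUBSETS)]
    for kid in keyids:
        buckets[subset_of(kid)].add(kid)
    return buckets

def reconcile(server_a, server_b):
    """Per subset, return (keys missing on A, keys missing on B)."""
    a, b = partition(server_a), partition(server_b)
    return [(b[i] - a[i], a[i] - b[i]) for i in range(NUM_SUBSETS)]
```

Servers carrying multiple subsets, as suggested above, would simply run this per-subset reconciliation for each subset they hold in common with a peer.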

_______________________________________________
Sks-devel mailing list
address@hidden
https://lists.nongnu.org/mailman/listinfo/sks-devel




