From: Lapo Luchini
Subject: [Monotone-devel] Re: big repositories inconveniences (partial pull?)
Date: Wed, 23 Aug 2006 20:47:55 +0200
User-agent: Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US; rv:1.8.0.5) Gecko/20060719 Thunderbird/1.5.0.5 Mnenhy/0.7.4.0 Hamster/2.0.0.1

Christof Petig wrote:
> Incremental cvs import is what cvssync provides.
I know: I used to use it for a project of mine; it was a bit slow but
definitely useful.
The main problem is that that project was only 9.3 MiB and a cvs_sync
took around 1 hour... I don't want to know how long it would take on
FreeBSD's CVSROOT, whose "cvs_import" takes the better part of a day ^_^
(I'm talking about the state of cvs_sync as of some months ago; it's
been a while since I last needed it.)

> cvssync and cvs_import do not mix (yet) since cvs_import does not
> remember the corresponding cvs revisions for each file.
I think adding a certificate with the corresponding CVS revision would
be a *good thing* whether one wishes to update the import or not: it
probably takes very little space in the DB and helps answer questions
like "what revision contains file xyz version 1.254?".
I guess the main problem is that certificates are per-revision and not
per-file, so yet another "blob" of data to be parsed would be needed
(with speed issues similar to those "annotate" has, I guess).
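Just to make the idea concrete, here's a rough sketch in Python (nothing
to do with the actual cvssync code; the cert format and all names below
are entirely made up) of what such a per-revision blob could look like,
and how one would answer the "file xyz at 1.254" question by scanning it:

cvs_revision_certs = {
    # hypothetical cert payload: monotone revision id -> {file path: CVS revision}
    "a1b2c3...": {"src/main.c": "1.253", "README": "1.10"},
    "d4e5f6...": {"src/main.c": "1.254", "README": "1.10"},
}

def revisions_containing(path, cvs_rev):
    # scanning every cert is the annotate-like linear cost mentioned above
    return [rev for rev, files in cvs_revision_certs.items()
            if files.get(path) == cvs_rev]

print(revisions_containing("src/main.c", "1.254"))  # -> ['d4e5f6...']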

> cvssync is under complete redesign at the moment. It got _much_
> faster and cleaner in this process but does not fully work, yet
> (Initial import already works). Stay tuned. [If you only need
> incremental import that should be possible within this week, again]
>
I look forward to trying it as soon as you give the thumbs-up ;-)

> cvssync was never tested against huge repositories. Ease of design
> and bandwidth efficiency were design guidelines not memory
> efficiency.
For such a huge import, doing it locally is pretty much the only way to
do a full import, I guess, so bandwidth wouldn't be a problem; there are
plenty of other sources of problems, though. 0=)

What about the following sentence in the wiki?
"There is an important limitation, though. This method doesn't
presently try to attach branches to their parents, either on the
mainline or on other branches, instead each CVS branch gets its own
separate linear history in the resulting monotone db."
Does it still apply to cvs_import? Does it apply to cvs_sync (and/or
will it apply to your "complete redesign")?
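If I understand that sentence correctly, the difference is something
like this (a minimal made-up sketch of the resulting ancestry as Python
dicts, not real monotone data):

# "attached" is what I'd hope for; "separate" is what the wiki describes:
# each CVS branch starts its own disconnected linear history.
attached = {
    "trunk_1": [],
    "trunk_2": ["trunk_1"],
    "branch_1": ["trunk_2"],   # branch root keeps its trunk parent
    "branch_2": ["branch_1"],
}
separate = {
    "trunk_1": [],
    "trunk_2": ["trunk_1"],
    "branch_1": [],            # branch root is parentless: separate history
    "branch_2": ["branch_1"],
}

def roots(history):
    # revisions with no parents; ideally only the very first trunk revision
    return [r for r, parents in history.items() if not parents]

print(roots(attached))  # ['trunk_1']
print(roots(separate))  # ['trunk_1', 'branch_1'] -> disconnected branch root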

What I'm looking for is mainly a way to fully import a locally
available CVSROOT into a monotone database, staying as close as possible
to the original data, and to keep it in sync one-way; two-way or remote
sync is only a plus.

--
Lapo Luchini
address@hidden (OpenPGP & X.509)
www.lapo.it (Jabber, ICQ, MSN)