monotone-devel

Re: [Monotone-devel] Re: monotone CVS import failed.


From: Michael Haggerty
Subject: Re: [Monotone-devel] Re: monotone CVS import failed.
Date: Sat, 28 Oct 2006 13:47:10 +0200
User-agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.8.0.7) Gecko/20060922 Thunderbird/1.5.0.7 Mnenhy/0.7.4.666

Markus Schiltknecht wrote:
> Jon Smirl wrote:
>> I have been trying with cvs2svn for three months now and progress
>> isn't happening. You at least seem interested in making things work
>> right for Mozilla.
> 
> I don't think so. The graph algorithm is just very new and we still have
> to experiment with it. I don't know the details of why the graph-based
> cvs2svn fails to import Mozilla, though.

For the record, cvs2svn works *fine* with the mozilla repository.  I've
converted it multiple times without problems.

What doesn't work is using cvs2svn *with Jon Smirl's changes* to support
conversion from CVS *to git*.  This was never a design goal of cvs2svn.
We (meaning I, since I am for all intents and purposes the only cvs2svn
developer right now) would like to support him in making this work, but
I haven't had much time recently to work on cvs2svn.

IIUC, the specific problem is that git is missing some features that CVS
has, like the ability to create tags based on multiple branches.
Whether this is a worthwhile feature or not is debatable, but CVS and
Subversion can both do it.  So it is obviously more work to convert a
CVS repository to git than to Subversion.
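The tag mismatch is easiest to see in a sketch. Here is a purely illustrative Python fragment (not cvs2svn code; the file names, revision numbers, and the "mixed tag" heuristic are all invented for this example) modeling how a CVS tag maps each file to its own revision, possibly on different branches, while a git tag can only name one commit:

```python
# Illustrative sketch (not cvs2svn code; file names and revision numbers
# are invented).  In CVS, a tag maps each file to its own revision, and
# those revisions may live on different branches; in git, a tag names
# exactly one commit, so such a tag has no direct equivalent.
cvs_tag = {
    "parser.c": "1.4",      # trunk revision (one dot)
    "lexer.c": "1.2.2.3",   # branch revision (three dots)
}

# A git tag, by contrast, is a single pointer:
#   refs/tags/RELEASE_1_0 -> one commit SHA
# so a converter must synthesize a commit whose tree combines the file
# states above before it can attach the tag.

# Crude heuristic: differing dot counts mean revisions at different
# branch depths, i.e. a "mixed" tag that needs a synthesized commit.
is_mixed = len({rev.count(".") for rev in cvs_tag.values()}) > 1
assert is_mixed  # this tag mixes trunk and branch revisions
```

A converter targeting git has to detect such tags and manufacture an artificial commit to hold the tagged file states; targeting Subversion, a tag is just a copy, so the problem never arises.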

(Will you also have this problem when converting to Monotone?)

>>> Did you watch memory consumption?
>>
>> Around 1.2GB when it died.
> 
> That's good to know. IMHO it's the main difference between my monotone
> cvs_import rewrite and cvs2svn's graph based approach: the 'in-memory'
> vs. 'on-disk' issue.
> 
> It convinces me that spilling to disk is not necessary, because it looks
> like the whole mozilla repository with all its blobs and its
> dependencies fits into 1.2 GB of memory (this is of course excluding the
> files and their deltas themselves).

Is the monotone conversion done in C/C++ or a scripting language?
Because I think the Python object overhead would make an in-core
conversion too expensive for the largest archives, at least without
packing in-core information into strings in binary format or something.
The on-disk databases and multiple passes also give us resumability of
a partial conversion.  But I readily admit that the on-disk + pass
structure of our conversions is a lot of work to support and extremely
expensive in terms of conversion time.
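The "packing in-core information into strings in binary format" idea can be sketched in a few lines of Python, assuming per-revision metadata reduces to fixed-width integers (the class and field names here are invented for illustration, not taken from cvs2svn):

```python
import struct

# Illustrative only: metadata kept as a plain Python object carries
# per-instance overhead (an object header plus an attribute dict),
# while the same fields packed with the struct module cost exactly
# their binary width.  Field names are invented for this sketch.
class Revision:
    def __init__(self, file_id, rev_num, timestamp):
        self.file_id = file_id
        self.rev_num = rev_num
        self.timestamp = timestamp

rev = Revision(42, 7, 1161956830)

# Three unsigned 32-bit integers pack into exactly 12 bytes:
packed = struct.pack("!III", rev.file_id, rev.rev_num, rev.timestamp)
assert len(packed) == 12

# Unpacking on demand recovers the fields without keeping an object alive:
file_id, rev_num, timestamp = struct.unpack("!III", packed)
assert (file_id, rev_num, timestamp) == (42, 7, 1161956830)
```

For millions of revisions, the difference between a dict-backed object and a 12-byte string is the difference between fitting in memory and not, which is presumably why an in-core conversion in Python would need this kind of trick.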

Michael
