
Re: [Bug-wget] Feature: Concurrency in recursive downloads

From: thulla
Subject: Re: [Bug-wget] Feature: Concurrency in recursive downloads
Date: Mon, 3 Aug 2009 23:52:24 -0700

Thanks for the pointer. I hadn't come across mulk before; glancing through
it, it definitely looks interesting, but it's missing several of the options
I find useful in wget (--convert-links, --exclude-domains, --reject,
and others). I'll play with it anyway and see where it takes me.

How much work would it be, though, to build this parallelism into wget? If I
had to choose, would I be better off hacking the features I need into mulk,
or adding parallelism to wget?
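In the meantime, a rough stopgap is to approximate the "pool of downloaders" idea from outside wget: collect the URL list in one pass, then fan the fetches out across several wget processes with xargs -P. This is only a sketch (the example URLs and the two-phase split are mine, not anything wget provides natively), and it loses the link-tracking that a truly concurrent recursive wget would have:

```shell
# Phase 1 (not run here): harvest candidate URLs with a spider pass, e.g.
#   wget --spider --recursive --level=2 --no-verbose http://example.com 2>&1 \
#     | grep -o 'http://[^ ]*' | sort -u > urls.txt
# Phase 2: stand in a queue of URLs, then drain it with up to 4 workers.
printf '%s\n' \
  'http://example.com/a' \
  'http://example.com/b' \
  'http://example.com/c' > urls.txt
# Each worker runs one wget per URL; 'echo' is a dry-run guard --
# drop it to actually download.
xargs -P 4 -n 1 echo wget -q < urls.txt
```

The obvious limitation is that phase 2 treats the queue as flat, so options like --convert-links can't see the whole crawl; that's exactly why having the pool inside wget itself would be nicer.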


On Mon, Aug 3, 2009 at 8:43 PM, Anthony Bryan <address@hidden> wrote:

> On Mon, Aug 3, 2009 at 7:05 PM, <address@hidden> wrote:
> > I've been using wget to recursively download parts of a web page, and
> > would find it very useful if wget allowed for concurrent downloads (up
> > to some maximum), so that the "queued" URLs could be downloaded by some
> > sort of pool of downloaders. I didn't see any discussion of this in the
> > list archives or even on Google in general. I'm curious whether this is
> > something that has been considered, since it seems very useful to me in
> > speeding up downloads.
> Have you looked at mulk? It might already have the features you're
> looking for.
> http://mulk.sourceforge.net/
> "Multi-connection command line tool for downloading Internet sites
> with image filtering and Metalink support. Similar to wget and cURL,
> but it manages up to 50 simultaneous and parallel links. Main features
> are: HTML code parsing, recursive fetching, Metalink retrieving,
> segmented download and image filtering by width and height."
> --
> (( Anthony Bryan ... Metalink [ http://www.metalinker.org ]
>  )) Easier, More Reliable, Self Healing Downloads
