
Re: [Bug-wget] Feature question ...


From: Micah Cowan
Subject: Re: [Bug-wget] Feature question ...
Date: Thu, 19 Apr 2012 08:38:37 -0700
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:11.0) Gecko/20120302 Thunderbird/11.0

Probably in combination with -nd (no directories), -k (convert links)
and -E (adjust filename extensions).
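A sketch of how those options might be combined (the URL is a placeholder,
and --span-hosts/-H is my own assumption, since some of the requisites live
on other servers):

  wget --page-requisites --span-hosts --no-directories --convert-links \
       --adjust-extension http://example.com/page.html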

-mjc

On 04/19/2012 08:27 AM, Tony Lewis wrote:
> You're looking for:
>   --page-requisites    get all images, etc. needed to display HTML page.
> 
> wget URL --page-requisites
> 
> should give you what you need.
> -----Original Message-----
> From: address@hidden
> [mailto:address@hidden On Behalf Of Garry
> Sent: Thursday, April 19, 2012 2:45 AM
> To: address@hidden
> Subject: [Bug-wget] Feature question ...
> 
> Hi,
> 
> not exactly a bug question/report, but something I was trying to get done
> with wget; either I have overlooked it in the docs, misread them, or it's
> just plain not possible at the moment ...
> 
> I'm trying to mirror a full web page, but with some restrictions ... I need
> a single page (either the full path, or - if it's the main page in a
> directory - just that index page) to be downloaded, with all embedded media
> (at least images, CSS, JS includes, etc.), even if those media files are not
> stored on that server. As I need the information for archival purposes, I do
> not want a full tree of directories rebuilt, as wget would normally do. All
> the files should be downloaded and stored under unique file names in the
> same directory as the page file, and of course the HTML page should be
> re-coded to use relative paths to those renamed files.
> 
> Can this be done with wget? Or if not, is there a different program
> (Linux) that will do this?
> 
> Tnx, Garry
> 



