
Re: [Bug-wget] --header="Accept-encoding: gzip"


From: Ander Juaristi
Subject: Re: [Bug-wget] --header="Accept-encoding: gzip"
Date: Tue, 22 Sep 2015 22:32:54 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:31.0) Gecko/20100101 Thunderbird/31.8.0

Hi,

Could you tell us the exact URLs you are trying with?

If you can't post those URLs in public, either set up a test server, or send 
them to me privately.

I think what you need is the -r/--recursive option, but I can't be sure unless I
see the URLs.
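
For example, a recursive fetch that also grabs page requisites could look like the
line below (just a rough sketch; "https://example.com/" is a placeholder, and the
right depth and host settings depend on your site):

wget -r -l 1 -p -H "https://example.com/"

Here -r enables recursion, -l 1 limits it to one level, -p pulls page requisites
(scripts, stylesheets, images), and -H allows spanning to other hosts. Be careful
with -H together with -r, since it lets recursion leave the original host.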

Also, bear in mind that Wget does not support any kind of content-coding. This means that 
if the content comes gzipped, Wget will not be able to decompress it for you. So 
passing '--header="Accept-encoding: gzip"' probably won't do what you expect.
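
If you do end up with gzip-compressed files saved to disk, one workaround (only a
sketch; it assumes the saved file is named index.html and really contains gzip data)
is to decompress them yourself afterwards:

file index.html
mv index.html index.html.gz && gunzip index.html.gz

The first command just confirms whether the file is actually gzip data; the rename is
needed because gunzip expects a .gz suffix when decompressing in place.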

On 09/22/2015 07:57 PM, andreas wpv wrote:
All,
I am trying to download all the files of a web page - compressed if they
come compressed, and regular if not. In other words, get all the files the way a
browser would.

So, this works for the HTML file itself:

wget --header="Accept-encoding: gzip" "url"

and this by itself works to download all elements:

wget -p -H "url"

So, now I want these combined:

wget -p -H --header="Accept-encoding: gzip" "url"

Unfortunately this only pulls the HTML files (because where I pull them
from, they are compressed), and not all the other scripts, stylesheets, and so
on, even though at least a few of those are compressed as well.


Ideas, tips?


Regards,
- AJ


