Re: Processing a big file using more CPUs
From: Shlomi Fish
Subject: Re: Processing a big file using more CPUs
Date: Tue, 12 Feb 2019 12:39:24 +0200
On Mon, 11 Feb 2019 23:54:43 +0100
Ole Tange <ole@tange.dk> wrote:
> On Mon, Feb 4, 2019 at 10:19 PM Nio Wiklund <nio.wiklund@gmail.com> wrote:
> :
> > cat bigfile | parallel --pipe --recend '' -k gzip -9 > bigfile.gz
> :
> > The reason why I want this is that I often create compressed images of
> > the content of a drive, /dev/sdx, and I lose approximately half the
> > compression improvement from gzip to xz, when using parallel. The
> > improvement in speed is good, 2.5 times, but I think larger blocks would
> > give xz a chance to get a compression much closer to what it can get
> > without parallel.
> >
> > Is it possible with the current code? If so, how?
>
> Since version 2016-07-22:
>
> parallel --pipepart -a bigfile --recend '' -k --block -1 xz > bigfile.xz
> parallel --pipepart -a /dev/sdx --recend '' -k --block -1 xz > bigfile.xz
>
> Unfortunately the size computation of block devices only works under
> GNU/Linux.
>
> (That said: pxz exists, and it may be more relevant to use here).
>
Hi Ole!
I see - https://jnovy.fedorapeople.org/pxz/node2.html - but note that xz
now has a built-in -T flag for multithreaded compression as well -
https://linux.die.net/man/1/xz .
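For reference, a minimal sketch of using xz's native threading instead of
piping through parallel or pxz ("bigfile" is a placeholder name; -T0 asks
xz to autodetect the number of CPU cores):

```shell
# Compress with xz using all available cores; larger block sizes within
# xz's threaded mode generally preserve more of the compression ratio
# than splitting the stream externally.
xz -T0 -9 -c bigfile > bigfile.xz
```

Note that threaded xz splits the input into independent blocks internally,
so very small inputs may still be compressed on a single core.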
>
> /Ole
>
--
-----------------------------------------------------------------
Shlomi Fish http://www.shlomifish.org/
NSA Factoids - http://www.shlomifish.org/humour/bits/facts/NSA/
Chuck Norris’ ciphers were once broken. He responded by breaking those
individuals. (By sevvie: http://sevvie.github.io/ .)
— http://www.shlomifish.org/humour/bits/facts/Chuck-Norris/
Please reply to list if it's a mailing list post - http://shlom.in/reply .