From: Markus Bergholz
Subject: Re: Import large field-delimited file with strings and numbers
Date: Mon, 8 Sep 2014 22:14:18 +0200
On 09/08/2014 08:27 PM, Markus Bergholz wrote:
Hi. As I said above, I already solved the problem (with your help). Bottom line: I think it has to do with the way Octave allocates memory for cell arrays, which is not very efficient (as opposed to dense or sparse numerical data, which it handles very well).
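A minimal sketch of how to see that overhead for yourself (the exact byte counts whos reports will vary by platform and Octave version):

    n = 1e6;
    x = rand (n, 1);     % dense numeric column: one contiguous block, about 8*n bytes
    c = num2cell (x);    % cell array: each element is a separately allocated object
    whos x c             % compare the byte counts reported for x and c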
I managed to solve the problem I had, thanks to the help of you guys.
However, I think it would be nice if future versions of Octave shipped with something akin to ulimit enabled by default, to prevent a runaway process from eating up all available memory.
If someone wants to check this issue, the data I am working with is public:
http://www.bls.gov/cew/data/files/*/csv/*_annual_singlefile.zip
where * = 1990:2013
nvm, got it.
Which columns do you need?
I just put the link so that anyone interested can check the memory-overload problem.
(But the data I need to extract is in columns 1-3 and 8-11.)
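In case anyone wants to reproduce the extraction, here is a rough textscan sketch. The file name is hypothetical, and treating columns 1-3 as string codes and columns 8-11 as numeric (with one header row) is my assumption about the layout, so adjust the format to the actual file:

    fid = fopen ('1990.annual.singlefile.csv', 'r');        % hypothetical name of one unzipped file
    fmt = '%s %s %s %*s %*s %*s %*s %f %f %f %f %*[^\n]';   % keep columns 1-3 and 8-11, skip the rest
    C = textscan (fid, fmt, 'Delimiter', ',', 'HeaderLines', 1);
    fclose (fid);
    % C{1}..C{3} hold the string columns, C{4}..C{7} the numeric ones

Skipping the unwanted string columns with %*s keeps them from being allocated as cells at all, which should help with the memory problem described above.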
Many thanks
j