Re: Octave project involvement
From: David Bateman
Subject: Re: Octave project involvement
Date: Fri, 07 Sep 2007 10:41:03 +0200
User-agent: Thunderbird 1.5.0.7 (X11/20060921)
John W. Eaton wrote:
> Using repmat for this job is OK but can use up a lot of memory
> unnecessarily.
>
Yes, that is why Matlab introduced bsxfun: to avoid the additional memory
use. The disadvantage is that for each column of the matrix it needs to
do an feval, which can be a bit slow. An example relevant to this
discussion:
n = 1000;
A = randn(1000,1000);
B = randn(1000,1);
t0 = cputime();
C = A .* repmat(B, 1, n);
cputime() - t0
t0 = cputime();
D = bsxfun (@(x,y) x .* y, A, B);
cputime() - t0
With the bsxfun function in CVS, the above returns
ans = 0.14298
ans = 0.20297
so bsxfun is slightly slower than repmat, but doesn't use the
additional memory. The reason bsxfun is slower than it needs to be is
that it can make no assumption that the matrix type returned from the
feval will be the same at each call, whereas in the case of ".*" the
type is fixed once the types of the matrices A and B above are known.
Therefore an overloading of the operator (or a specialized function
that does this matrix-vector operation) should have a speed more like
that given by
n = 1000;
A = randn(1000,1000);
B = randn(1000,1);
B2 = repmat (B, 1, n);
t0 = cputime();
C = A .* B2;
cputime() - t0
ans = 0.075989
So yes there is something to gain here.
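For comparison outside Octave (a sketch of mine, not part of the original
discussion): NumPy's broadcasting already behaves like the specialized
operator suggested above, applying the elementwise multiply column-wise
without materializing the tiled copy of B. The np.tile call plays the role
of repmat here.

```python
import numpy as np

n = 1000
rng = np.random.default_rng(0)
A = rng.standard_normal((n, n))
B = rng.standard_normal((n, 1))

# repmat-style: materializes a full n-by-n copy of B before multiplying
C = A * np.tile(B, (1, n))

# broadcast-style: the n-by-1 column is expanded implicitly, so no
# temporary n-by-n matrix is allocated
D = A * B

# both approaches compute the same elementwise product
assert np.array_equal(C, D)
```

The two results are bit-identical; only the intermediate memory use differs,
which is the same trade-off repmat vs. bsxfun exposes in Octave.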
Regards
David
--
David Bateman address@hidden
Motorola Labs - Paris +33 1 69 35 48 04 (Ph)
Parc Les Algorithmes, Commune de St Aubin +33 6 72 01 06 33 (Mob)
91193 Gif-Sur-Yvette FRANCE +33 1 69 35 77 01 (Fax)