
RE: [Help-glpk] GLPK re-entrant


From: Michael Hennebry
Subject: RE: [Help-glpk] GLPK re-entrant
Date: Mon, 6 Jul 2009 10:12:51 -0500 (CDT)
User-agent: Alpine 1.00 (DEB 882 2007-12-20)

On Thu, 2 Jul 2009, Rios, Joseph L. (ARC-AFO) wrote:

Just to chime in again...


One can run multiple processes on multi-cored CPUs.
Shared memory isn't necessarily all that great a model,
but is difficult to avoid with threads.
Separate processes with explicit message-passing
can be a lot easier to wrap one's brain around.
The lack of a common address space makes it
hard for them to tromp on each other's data.

Yes, you're right. But it is not as efficient as a re-entrant library:
message-passing syscalls come at a computational cost, to say nothing
of process setup.

On a problem that would benefit from threads,
I'd expect process setup to be a small part of the processing time.
Similarly with subproblems that separate well,
I'd expect message-passing to be a small cost.
There would be messages to feed threads problems
and messages to give the master results.
Problems that don't separate well and would require lots of
communication are the ones that might benefit from shared memory.
For those kinds of problems it might be worth
it to wrap one's brain around shared memory.
Before going that route, I'd at least look for a new algorithm first.

I'm completely with Giampaolo on this.  Indeed, you can launch new processes
for parallelization, but having the ability to multi-thread is quite
important.  The application for which I had to create the patch launches
1000's of concurrent threads accessing common GLPK data.  That might be too
brutal with processes (I didn't actually try that, nor would I want to).

1000's?  Why?
Unless you had at least 100's of cores, and maybe even then,
I'd expect a smaller number to run faster.

Also, it would be interesting at least to identify some functionality or
method in GLPK that would greatly benefit from easy parallelization.

--
Michael   address@hidden
"Pessimist: The glass is half empty.
Optimist:   The glass is half full.
Engineer:   The glass is twice as big as it needs to be."



