From: Pierre SIRON
Subject: Re: [certi-dev] Re: CERTI-Devel Digest, Vol 28, Issue 5
Date: Tue, 22 Apr 2008 10:08:37 +0200
User-agent: Thunderbird 1.5 (Windows/20051201)
Eric Noulard wrote:
Hello Eric,

> 2008/4/21, Christian Stenzel <address@hidden>:
>> We send this as a single value. The computation of the matrix takes a
>> lot of time, so we would like to compute it in a separate federate and
>> send the result to subscribing federates.
>
> OK, I see. For such a computation MPI seems more appropriate. Are you
> porting the code from MPI, or do you "usually" have such high-volume
> data exchange? If I may be even more curious, could you tell me at
> which frequency you send the matrix?

My dear neighbor from the other side of the corridor,

I think the choice between MPI and HLA is more open than that. I supervised an internship on the subject of scientific computation with HLA (parallel resolution of a linear system, Kathrin Quince, 2006). We also had the problem of transferring a huge matrix; our solution was to transmit the matrix by blocks. It was not very efficient, but the computations that followed were correct.

Why do scientific computation with HLA? To avoid a gateway (and its overhead) between various execution environments, and the burden of mastering and deploying many tools.

Do we have many applications that integrate scientific computation into a distributed event-based simulation? I am thinking of the simulation of avionics systems, which could require more elaborate models of the aircraft and of the physics of its environment. Christian, could you add more examples here?

Are the HLA services appropriate for writing parallel programs? A first answer: we can write such programs with HLA.

Are the data management services appropriate? We can express point-to-point communication (a single publisher and a single subscriber). We can express one-to-many communication (data distribution). We can express many-to-one communication (a reduction operation, but without the power of a binary-tree communication scheme). The DDM services can be used to indicate the receiver of some data.

Are the time management services appropriate? A general parallel application requires a complex synchronization of tasks.
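The block-wise transfer we used can be sketched independently of any RTI: split the matrix into tiles, send each tile as one attribute update, and let the receiver reassemble the tiles even if they arrive out of order. A minimal sketch in plain Python, with no HLA calls; the tile format `((row, col), block)` and the function names are my illustrative assumptions, not CERTI API:

```python
def split_into_blocks(matrix, block_size):
    """Yield ((row, col), tile) pairs covering the matrix.

    matrix is a list of equal-length rows. Edge tiles may be smaller
    when the dimensions are not multiples of block_size. Each tile
    would be the payload of one attribute update on the sender side.
    """
    rows, cols = len(matrix), len(matrix[0])
    for r in range(0, rows, block_size):
        for c in range(0, cols, block_size):
            tile = [row[c:c + block_size] for row in matrix[r:r + block_size]]
            yield (r, c), tile

def reassemble(tiles, rows, cols):
    """Rebuild the full matrix from ((row, col), tile) pairs.

    Order does not matter, so tiles may be reflected in any order
    on the receiver side.
    """
    out = [[None] * cols for _ in range(rows)]
    for (r, c), tile in tiles:
        for i, tile_row in enumerate(tile):
            out[r + i][c:c + len(tile_row)] = tile_row
    return out

# Round-trip check on a 4x5 matrix with 2x2 tiles (6 tiles, edges smaller).
a = [[r * 5 + c for c in range(5)] for r in range(4)]
tiles = list(split_into_blocks(a, 2))
assert len(tiles) == 6
assert reassemble(tiles, 4, 5) == a
```

The coordinates carried with each tile are what made the subsequent computations correct despite the inefficient transfer: reassembly never depends on delivery order.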
These services can be useful. Even a fork-join mechanism can be written easily (though not intuitively, for a Fortran programmer).

How can we explain the superiority of MPI for data transfers?
a) Lower overhead of the MPI layer (latency)?
b) Execution of MPI applications on efficient architectures (processors and networks)?
c) Many data-transfer optimizations (even in the case of MPI over TCP)?

What are the MPI optimizations that could not be included in an RTI implementation? In the case of CERTI, we could study a direct connection between RTIAs for some objects (with a new transport attribute).

Your comments?

Thanks and see you soon,
Pierre
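To make the earlier remark about reductions "without the power of a binary-tree communication scheme" concrete: gathering n partial results through a single coordinator federate takes n-1 sequential updates, while pairwise tree combining (the kind of collective MPI implementations optimize) takes only about log2(n) rounds. A back-of-envelope count, assuming one attribute update per step; this is my illustration, not a measurement of any RTI:

```python
def flat_reduce_steps(n):
    # One coordinator federate receives from the n-1 others,
    # one update at a time: n-1 sequential steps.
    return n - 1

def tree_reduce_rounds(n):
    # Pairwise combining halves the number of partial results
    # each round: ceil(log2(n)) rounds.
    rounds = 0
    while n > 1:
        n = (n + 1) // 2
        rounds += 1
    return rounds

for n in (2, 8, 64, 1024):
    print(n, flat_reduce_steps(n), tree_reduce_rounds(n))
# With 1024 federates: 1023 sequential steps versus 10 rounds.
```

The gap (1023 versus 10 for 1024 participants) suggests why a tree-structured reduction, whether in an MPI library or in some future RTI transport, matters for this kind of workload.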