[Getfem-users] QUterm vs. parallelization
From: Igor Peterlik
Subject: [Getfem-users] QUterm vs. parallelization
Date: Fri, 3 Apr 2009 11:29:16 +0200
Dear GetFEM team,
I would like to ask about some strange behavior of the QUterm brick
when used with GETFEM_PARA_LEVEL=2.
First, I noticed that my application, composed of the Helmholtz,
Dirichlet (twice) and QUterm bricks, gives different results in the
sequential and parallel versions (moreover, the parallel result depends
on the number of processors). I found out that the QUterm brick
modifies the entire system matrix on each process. Maybe I have missed
something, but this did not seem to be correct behavior to me
(perhaps QUterm is not compatible with parallel mode...).
As a workaround, I added a condition around the gmm::add call in
mdbrick_QU_term::do_compute_tangent_matrix(), as shown below:

    if (getfem::MPI_IS_MASTER())
      gmm::add(get_K(), gmm::sub_matrix(MS.tangent_matrix(), SUBI));
so that the system matrix is modified only once. This seems to have
helped, since the solution is now invariant with respect to the number
of CPUs. Nevertheless, I guess this is not the real solution.
By the way, is there any documentation, even a brief one, concerning the
parallelization (which parts are parallelized and how)?
Regards
Igor