From: Axel Arnold
Subject: [ESPResSo-devel] CUDA
Date: Mon, 16 Sep 2013 18:28:33 +0200
User-agent: Mozilla/5.0 (X11; Linux x86_64; rv:17.0) Gecko/20130801 Thunderbird/17.0.8
Hi all,
since the CUDA part was somewhat messed up, here is some background on
CUDA/MPI.
MPI code is compiled by a compiler wrapper, typically mpicc or similar.
CUDA code, in turn, is compiled using nvcc, which for the host part is a
wrapper around gcc. On many systems, that will be a slightly older gcc
version than the default compiler. The header file cuda.h is targeted at
this older gcc and therefore might simply not work with the default gcc.
Therefore, mpicc and nvcc should be regarded as incompatible.
The solution is to split the CUDA and MPI code so that they can be
compiled by separate compilers. These are the .cpp and _cuda.cu files.
However, it is equally important to keep the header files clean of MPI-
and CUDA-related things. For historic reasons, most ESPResSo headers do
include mpi.h. Therefore, CUDA code should usually NOT include other
ESPResSo headers unless they have been checked to be MPI-free. On the
other hand, headers should NOT contain any #include <cuda.h> if they are
included outside of .cu files. Usually, you don't need such headers, with
the exception of cuda_utils.hpp, which is a CUDA helper file and is only
meant to be included in .cu files.
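
As an illustration, a header that is safe to include from both sides
might look like the sketch below. The file, type and function names are
hypothetical, not actual ESPResSo code; the point is only that the
header declares the exported functions without pulling in mpi.h or
cuda.h.

/* algo.hpp -- hypothetical example: both MPI- and CUDA-free */
#ifndef ALGO_HPP
#define ALGO_HPP

/* plain data type shared by the host and GPU implementations */
struct AlgoParams {
  int n_particles;
  float cutoff;
};

/* implemented in algo.cpp, compiled via the MPI compiler wrapper */
void algo_run(const AlgoParams &params);

/* implemented in algo_cuda.cu, compiled by nvcc; declared here with
   plain C++ types only, so no CUDA header is needed */
void algo_run_gpu(const AlgoParams &params, float *positions_device);

#endif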
As a rule of thumb, a CUDA-accelerated algorithm now has three files:
algo.cpp for the MPI-related implementation, algo_cuda.cu for the
CUDA-related implementation, and algo.hpp, which is both CUDA- and
MPI-free and just declares the exported functions and maybe common data types.
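
Continuing the sketch above, the two implementation files could then
look roughly as follows. Again, the names and the MPI/CUDA calls shown
are hypothetical placeholders, not actual ESPResSo code; they are only
meant to show which file may include what.

/* algo.cpp -- hypothetical MPI side; may include ESPResSo headers
   that in turn include mpi.h */
#include "algo.hpp"
#include <mpi.h>

void algo_run(const AlgoParams &params) {
  int rank;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);
  /* ... distribute the work over the nodes, then hand the local
     part to the GPU via algo_run_gpu() ... */
}

/* algo_cuda.cu -- hypothetical CUDA side; compiled by nvcc, includes
   neither mpi.h nor ESPResSo headers that are not MPI-free */
#include "algo.hpp"
#include "cuda_utils.hpp"  /* CUDA helpers, only meant for .cu files */

__global__ void algo_kernel(float *positions, int n, float cutoff) {
  int i = blockIdx.x * blockDim.x + threadIdx.x;
  if (i < n) {
    /* ... per-particle work ... */
  }
}

void algo_run_gpu(const AlgoParams &params, float *positions_device) {
  int threads = 128;
  int blocks = (params.n_particles + threads - 1) / threads;
  algo_kernel<<<blocks, threads>>>(positions_device, params.n_particles,
                                   params.cutoff);
}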
Cheers,
Axel
--
JP Dr. Axel Arnold
ICP, Universität Stuttgart
Allmandring 3
70569 Stuttgart, Germany
Email: address@hidden
Tel: +49 711 685 67609