[Espressomd-maintainer] Build failed in Jenkins: master-multiconfig-compile » immersed_boundaries #109


From: Jenkins Demon
Subject: [Espressomd-maintainer] Build failed in Jenkins: master-multiconfig-compile » immersed_boundaries #109
Date: Tue, 25 Oct 2016 09:46:37 +0200 (CEST)

See 
<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/109/changes>

Changes:

[richter] updated 01 lennard jones tutorial, spell check

[richter] added missing file

[richter] updated code display

[richter] fix for: Not clear, what exactly polymer observables are doing #877

[github] specified bond vectors explicitly

------------------------------------------
[...truncated 387 lines...]
* will be replaced by cmake in the future.                     *
* New features are only available via cmake.                   *
****************************************************************
END CONFIGURE
<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/>
+ maintainer/jenkins/build.sh
26208 (process ID) old priority 0, new priority 5
  
srcdir=<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/>
  
builddir=<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/>
  insource=true
START BUILD
  myconfig=immersed_boundaries
  build_procs=4
<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/>
 
<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/>
Copying immersed_boundaries.hpp to 
<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/myconfig.hpp...>
>make -j 4 
Making all in config
make[1]: Entering directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/config'>
make[1]: Nothing to be done for 'all'.
make[1]: Leaving directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/config'>
Making all in src
make[1]: Entering directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src'>
make  all-recursive
make[2]: Entering directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src'>
Making all in core
make[3]: Entering directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src/core'>
  GEN      myconfig-final.hpp <= ../../myconfig.hpp
  GEN      config-features.cpp
  GEN      config-features.hpp
make  all-recursive
make[4]: Entering directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src/core'>
make[5]: Entering directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src/core'>
  CXX      config-features.lo
  CXX      cells.lo
  CXX      collision.lo
  CXX      communication.lo
  CXX      comfixed.lo
  CXX      comforce.lo
  CXX      constraint.lo
  CXX      cuda_interface.lo
  CXX      cuda_init.lo
  CXX      debug.lo
  CXX      domain_decomposition.lo
  CXX      electrokinetics_pdb_parse.lo
  CXX      energy.lo
  CXX      external_potential.lo
  CXX      errorhandling.lo
  CXX      fft.lo
  CXX      fft-common.lo
  CXX      fft-dipolar.lo
  CXX      forcecap.lo
  CXX      forces.lo
  CXX      galilei.lo
  CXX      ghosts.lo
  CXX      global.lo
  CXX      grid.lo
  CXX      halo.lo
  CXX      iccp3m.lo
  CXX      imd.lo
  CXX      initialize.lo
  CXX      integrate.lo
  CXX      interaction_data.lo
  CXX      lattice.lo
  CXX      layered.lo
  CXX      lb.lo
  CXX      lb-boundaries.lo
  CXX      lbgpu.lo
lb.cpp: In function 'int lb_lbfluid_print_vtk_velocity(char*, std::vector<int>, 
std::vector<int>)':
lb.cpp:767:39: warning: narrowing conversion of 
'(lbpar_gpu.LB_parameters_gpu::dim_x + 4294967295u)' from 'unsigned int' to 
'int' inside { } [-Wnarrowing]
             bb_high = {lbpar_gpu.dim_x-1, lbpar_gpu.dim_y-1, 
lbpar_gpu.dim_z-1};
                                       ^
lb.cpp:767:39: warning: narrowing conversion of 
'(lbpar_gpu.LB_parameters_gpu::dim_x + 4294967295u)' from 'unsigned int' to 
'int' inside { } [-Wnarrowing]
lb.cpp:767:58: warning: narrowing conversion of 
'(lbpar_gpu.LB_parameters_gpu::dim_y + 4294967295u)' from 'unsigned int' to 
'int' inside { } [-Wnarrowing]
             bb_high = {lbpar_gpu.dim_x-1, lbpar_gpu.dim_y-1, 
lbpar_gpu.dim_z-1};
                                                          ^
lb.cpp:767:58: warning: narrowing conversion of 
'(lbpar_gpu.LB_parameters_gpu::dim_y + 4294967295u)' from 'unsigned int' to 
'int' inside { } [-Wnarrowing]
lb.cpp:767:77: warning: narrowing conversion of 
'(lbpar_gpu.LB_parameters_gpu::dim_z + 4294967295u)' from 'unsigned int' to 
'int' inside { } [-Wnarrowing]
             bb_high = {lbpar_gpu.dim_x-1, lbpar_gpu.dim_y-1, 
lbpar_gpu.dim_z-1};
                                                                             ^
lb.cpp:767:77: warning: narrowing conversion of 
'(lbpar_gpu.LB_parameters_gpu::dim_z + 4294967295u)' from 'unsigned int' to 
'int' inside { } [-Wnarrowing]
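
[Editor's note: the three warnings above come from a braced initializer. As the diagnostic itself says, lbpar_gpu.dim_x/dim_y/dim_z are unsigned int, so dim_x - 1 is unsigned, and list-initializing an int element narrows. Below is a minimal, self-contained sketch of the same situation and the obvious cast-based fix; the struct and variable names are placeholders, and bb_high being a std::vector<int> is an assumption consistent with the function signature quoted in the warning.]

#include <cstdio>
#include <vector>

// Hypothetical stand-in for lbpar_gpu; only the unsigned int dim_* members
// matter for reproducing the warning.
struct LB_parameters_gpu_sketch {
  unsigned int dim_x = 8;
  unsigned int dim_y = 8;
  unsigned int dim_z = 8;
};

int main() {
  LB_parameters_gpu_sketch lbpar_gpu;
  std::vector<int> bb_high;

  // lbpar_gpu.dim_x - 1 has type unsigned int; putting it in a braced
  // initializer for a std::vector<int> is a narrowing conversion, which GCC
  // reports under -Wnarrowing:
  //
  //   bb_high = {lbpar_gpu.dim_x - 1, lbpar_gpu.dim_y - 1, lbpar_gpu.dim_z - 1};
  //
  // An explicit cast keeps the same values and silences the warning:
  bb_high = {static_cast<int>(lbpar_gpu.dim_x - 1),
             static_cast<int>(lbpar_gpu.dim_y - 1),
             static_cast<int>(lbpar_gpu.dim_z - 1)};

  std::printf("%d %d %d\n", bb_high[0], bb_high[1], bb_high[2]);
  return 0;
}
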
  CXX      lees_edwards.lo
  CXX      lees_edwards_domain_decomposition.lo
  CXX      lees_edwards_comms_manager.lo
  CXX      metadynamics.lo
  CXX      minimize_energy.lo
  CXX      modes.lo
  CXX      molforces.lo
  CXX      mol_cut.lo
  CXX      nemd.lo
  CXX      npt.lo
  CXX      nsquare.lo
  CXX      particle_data.lo
  CXX      polymer.lo
  CXX      polynom.lo
  CXX      pressure.lo
  CXX      random.lo
  CXX      rattle.lo
  CXX      reaction.lo
  CXX      readpdb.lo
  CXX      Ringbuffer.lo
  CXX      rotate_system.lo
  CXX      rotation.lo
  CXX      RuntimeError.lo
  CXX      RuntimeErrorCollector.lo
  CXX      RuntimeErrorStream.lo
  CXX      scafacos.lo
  CXX      specfunc.lo
  CXX      statistics.lo
  CXX      statistics_chain.lo
  CXX      statistics_cluster.lo
  CXX      statistics_correlation.lo
  CXX      statistics_fluid.lo
  CXX      statistics_molecule.lo
  CXX      statistics_observable.lo
  CXX      statistics_wallstuff.lo
  CXX      thermostat.lo
  CXX      topology.lo
  CXX      tuning.lo
  CXX      utils.lo
  CXX      uwerr.lo
  CXX      verlet.lo
  CXX      virtual_sites.lo
  CXX      virtual_sites_com.lo
  CXX      virtual_sites_relative.lo
  CXX      vmdsock.lo
  CXX      ghmc.lo
  CXX      Vector.lo
  CXX      SystemInterface.lo
  CXX      integrate_sd.lo
  CXX      EspressoSystemInterface.lo
  CXX      PdbParser.lo
  CXX      mpiio.lo
  CXX      MpiCallbacks.lo
  CXX      bmhtf-nacl.lo
  CXX      buckingham.lo
  CXX      cos2.lo
  CXX      dpd.lo
  CXX      gaussian.lo
  CXX      gb.lo
  CXX      hat.lo
  CXX      hertzian.lo
  CXX      lj.lo
  CXX      ljangle.lo
  CXX      ljcos.lo
  CXX      ljcos2.lo
  CXX      ljgen.lo
  CXX      morse.lo
  CXX      soft_sphere.lo
  CXX      steppot.lo
  CXX      tab.lo
  CXX      tunable_slip.lo
  CXX      angle.lo
  CXX      angle_harmonic.lo
  CXX      angle_cosine.lo
  CXX      angle_cossquare.lo
  CXX      angledist.lo
  CXX      dihedral.lo
  CXX      endangledist.lo
  CXX      fene.lo
  CXX      harmonic_dumbbell.lo
  CXX      harmonic.lo
  CXX      quartic.lo
  CXX      overlap.lo
  CXX      umbrella.lo
  CXX      bonded_coulomb.lo
  CXX      subt_lj.lo
  CXX      object-in-fluid/oif_global_forces.lo
  CXX      object-in-fluid/oif_local_forces.lo
  CXX      object-in-fluid/out_direction.lo
  CXX      hydrogen_bond.lo
  CXX      twist_stack.lo
  CXX      debye_hueckel.lo
  CXX      elc.lo
  CXX      magnetic_non_p3m_methods.lo
  CXX      mdlc_correction.lo
  CXX      maggs.lo
  CXX      mmm1d.lo
  CXX      mmm2d.lo
  CXX      mmm-common.lo
  CXX      p3m.lo
  CXX      p3m-common.lo
  CXX      p3m-dipolar.lo
  CXX      reaction_field.lo
  GEN      config-version.cpp
  NVCC     cuda_init_cuda.lo
  NVCC     cuda_common_cuda.lo
  NVCC     electrokinetics_cuda.lo
  NVCC     fd-electrostatics_cuda.lo
  NVCC     lbgpu_cuda.lo
  NVCC     p3m_gpu_cuda.lo
  NVCC     EspressoSystemInterface_cuda.lo
  NVCC     integrate_sd_cuda.lo
  NVCC     integrate_sd_cuda_kernel.lo
  NVCC     integrate_sd_cuda_debug.lo
  NVCC     integrate_sd_matrix.lo
  NVCC     integrate_sd_cuda_device.lo
  NVCC     actor/Mmm1dgpuForce_cuda.lo
  NVCC     actor/EwaldGPU_cuda.lo
  NVCC     actor/HarmonicWell_cuda.lo
  NVCC     immersed_boundary/ibm_cuda.lo
  NVCC     actor/DipolarDirectSum_cuda.lo
  NVCC     actor/HarmonicOrientationWell_cuda.lo
  NVCC     p3m_gpu_error_cuda.lo
  CXX      object-in-fluid/affinity.lo
  CXX      object-in-fluid/membrane_collision.lo
  CXX      immersed_boundary/ibm_main.lo
  CXX      immersed_boundary/ibm_triel.lo
  CXX      immersed_boundary/ibm_volume_conservation.lo
  CXX      immersed_boundary/ibm_tribend.lo
  CXX      immersed_boundary/ibm_cuda_interface.lo
  CXX      actor/ActorList.lo
  CXX      actor/HarmonicWell.lo
immersed_boundary/ibm_cuda_interface.cpp: In function 'void 
IBM_cuda_mpi_get_particles()':
immersed_boundary/ibm_cuda_interface.cpp:94:62: error: 'COMM_TRACE' was not 
declared in this scope
   COMM_TRACE(fprintf(stderr, "%d: finished get\n", this_node));
                                                              ^
immersed_boundary/ibm_cuda_interface.cpp: In function 'void 
IBM_cuda_mpi_get_particles_slave()':
immersed_boundary/ibm_cuda_interface.cpp:109:91: error: 'COMM_TRACE' was not 
declared in this scope
   COMM_TRACE(fprintf(stderr, "%d: get_particles_slave, %d particles\n", 
this_node, n_part));
                                                                                
           ^
immersed_boundary/ibm_cuda_interface.cpp: In function 'void 
IBM_cuda_mpi_send_velocities()':
immersed_boundary/ibm_cuda_interface.cpp:203:63: error: 'COMM_TRACE' was not 
declared in this scope
   COMM_TRACE(fprintf(stderr, "%d: finished send\n", this_node));
                                                               ^
immersed_boundary/ibm_cuda_interface.cpp: In function 'void 
IBM_cuda_mpi_send_velocities_slave()':
immersed_boundary/ibm_cuda_interface.cpp:217:92: error: 'COMM_TRACE' was not 
declared in this scope
   COMM_TRACE(fprintf(stderr, "%d: send_particles_slave, %d particles\n", 
this_node, n_part));
                                                                                
            ^
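
[Editor's note: the actual build breaker is the undeclared COMM_TRACE in immersed_boundary/ibm_cuda_interface.cpp. As far as I can tell, COMM_TRACE is one of the core's compile-out tracing macros (defined in the debug header and expanding to nothing unless COMM_DEBUG is set), so this looks like a missing include in the new immersed-boundary file rather than a CUDA problem. A rough sketch of how such a macro behaves follows; the definition here is hypothetical, not the exact ESPResSo one.]

#include <cstdio>

// Hypothetical compile-out trace macro; the real COMM_TRACE should come from
// the core's debug header and be guarded in a similar way.
#ifdef COMM_DEBUG
#define COMM_TRACE(cmd) do { cmd; } while (0)
#else
#define COMM_TRACE(cmd)  // expands to nothing in normal builds
#endif

int main() {
  int this_node = 0;   // placeholder for the MPI rank used by the real code
  (void)this_node;     // avoid an unused-variable warning when tracing is off

  // Same call pattern as in IBM_cuda_mpi_get_particles(); if the header that
  // defines COMM_TRACE is not included, the compiler treats it as an
  // undeclared identifier, which is exactly the error reported above.
  COMM_TRACE(std::fprintf(stderr, "%d: finished get\n", this_node));
  return 0;
}
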
  CXX      actor/HarmonicOrientationWell.lo
Makefile:1019: recipe for target 'immersed_boundary/ibm_cuda_interface.lo' 
failed
make[5]: *** [immersed_boundary/ibm_cuda_interface.lo] Error 1
make[5]: *** Waiting for unfinished jobs....
make[5]: Leaving directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src/core'>
Makefile:1042: recipe for target 'all-recursive' failed
make[4]: *** [all-recursive] Error 1
make[4]: Leaving directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src/core'>
Makefile:711: recipe for target 'all' failed
make[3]: *** [all] Error 2
make[3]: Leaving directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src/core'>
Makefile:455: recipe for target 'all-recursive' failed
make[2]: *** [all-recursive] Error 1
make[2]: Leaving directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src'>
Makefile:393: recipe for target 'all' failed
make[1]: *** [all] Error 2
make[1]: Leaving directory 
'<http://espressomd.org/jenkins/job/master-multiconfig-compile/myconfig=immersed_boundaries/ws/src'>
Makefile:481: recipe for target 'all-recursive' failed
make: *** [all-recursive] Error 1
+ exit 1
Build step 'Execute shell' marked build as failure


