Re: [Swarm-Modelling] Floating point arithmetic


From: Marcus G. Daniels
Subject: Re: [Swarm-Modelling] Floating point arithmetic
Date: Sun, 01 May 2005 18:40:45 -0600
User-agent: Mozilla Thunderbird 0.9 (Windows/20041103)

Russell Standish wrote:

C# and Java were designed for a completely different application, the
web applet. Here performance is not necessary, but runtime portability
is.

While C# was intended to take Java market share away, and browser embedding was clearly a goal of Java, I think it would be wrong to say that .NET is not a priority for Microsoft (where C# is just one hosted language, as is C++). Sun is now also using the idea of runtime profiling and recalibrated JIT code generation in their HPCS effort. I see the two worlds (ahead-of-time compilation and just-in-time compilation) coming together, as the crucial issues for future performance of complex codes will be effective dynamic code partitioning over independent compute engines (finding parallelism) and identification of runtime bottlenecks like out-of-cache memory access. Expensive ahead-of-time analysis and mapping of user code to the CPU architecture(s) in a system will still be important, but since the dynamics of a program can change over time (e.g. in an agent-based simulation), it will also be useful to have system support to do things like adaptively arrange the access pattern over a working set to fit in cache.
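The working-set point can be illustrated with classic loop tiling — the hand-written, ahead-of-time version of the access-pattern rearrangement described above. This is a generic sketch, not code from the thread; the tile size of 64 is an illustrative constant, not a tuned value:

```cpp
#include <cstddef>
#include <vector>

// Naive transpose: the strided writes to b touch a new cache line on
// almost every iteration once n*n exceeds the cache.
void transpose_naive(const std::vector<double>& a, std::vector<double>& b,
                     std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        for (std::size_t j = 0; j < n; ++j)
            b[j * n + i] = a[i * n + j];
}

// Tiled transpose: process B x B blocks so the reads and writes of a
// block both stay cache-resident while it is worked on.
void transpose_tiled(const std::vector<double>& a, std::vector<double>& b,
                     std::size_t n) {
    const std::size_t B = 64;  // tile edge; would be tuned to cache size
    for (std::size_t ii = 0; ii < n; ii += B)
        for (std::size_t jj = 0; jj < n; jj += B)
            for (std::size_t i = ii; i < ii + B && i < n; ++i)
                for (std::size_t j = jj; j < jj + B && j < n; ++j)
                    b[j * n + i] = a[i * n + j];
}
```

The adaptive version Marcus is describing would pick B (or the layout itself) at runtime from observed miss rates rather than baking it in.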

1) I learnt C++ in 1993/4, a decade after the language came out. At
  that time, it was completely unknown in computational science. Even
  C was considered a novelty at the time, barely registering in a
  world of Fortran77.

In contrast, Java took off relatively quickly. If a commercially supported computer language provides a big productivity boost (which shouldn't be hard for a community that tolerates crude tools like MPI), I'm optimistic that things could change more quickly this time. These languages aren't big conceptual jumps like Haskell; they are evolutionary changes designed to make parallelism more obvious by choosing some useful abstractions for data. I think the extent to which these abstractions 1) improve the transparency of code to scientists and 2) help the compiler and runtime saturate the processors in a big system will be the main factor determining how successful these HPCS projects are.

Today's development tools for multiprocessing are pretty weak. That (good) extensions like OpenMP have to be grafted on as pragmas shows that languages like C++ don't naturally lend themselves to describing and exploiting concurrency. So it's cool to me to see real money being spent on R&D that could actually impact scientific computing in the medium term. If the output of that ends up being more parallelism pragmas, clever templates and data structures, dynamic runtime support, etc. for C++/STL or Fortran 20XX, that's great too.
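For readers who haven't used OpenMP, the "grafted on as pragmas" style looks like the sketch below (a generic illustration, not code from this thread). The loop body is ordinary C++; the entire parallelism story lives in one directive, and a compiler without OpenMP support simply ignores it and runs the loop serially:

```cpp
#include <cstddef>
#include <vector>

// Sum a vector. The pragma asks for the iterations to be split across
// threads, with each thread's partial sum combined via the reduction
// clause; remove the pragma and the code is unchanged serial C++.
double parallel_sum(const std::vector<double>& v) {
    double sum = 0.0;
    #pragma omp parallel for reduction(+:sum)
    for (std::ptrdiff_t i = 0; i < static_cast<std::ptrdiff_t>(v.size()); ++i)
        sum += v[i];
    return sum;
}
```

That the concurrency is invisible to the type system and the rest of the language — it rides along in a comment-like annotation — is exactly the grafting the paragraph above describes.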

