[fpc-devel] Parallel Computing
ppopov at tamu.edu
Wed Nov 5 00:18:23 CET 2008
> As much as I do appreciate your comments, I really can't believe that
> the GNU GCC community took the pain to fully integrate OpenMP that
> deeply in their system that it can be used by just issuing a single
> #pragma line, if there would be no benefit to be expected.
Actually, most people who are doing parallel computing for scientific
purposes (the vast majority of parallel programs written so far) use MPI.
Language extensions have never been very popular where shared-memory
machines/clusters/supercomputers are involved. The main reason is that
such equipment is usually unique and the manufacturer typically provides
only a FORTRAN/C compiler, sometimes C++, and always an optimized MPI.
There is virtually no chance that GCC would run well on a custom
supercomputer. Moreover, parallel programming is quite difficult, so
usually there is no time to explore new language concepts.
Let me give you a flavour of how concurrency is implemented in ADA, using
a Pascal-like syntax for illustration:
procedure TSomeClass.DoSomething;
type
  local_task1 = task;             // Declaration of parallel task
  local_task2 = task(x: integer); // Declaration of parallel task
task local_task1;
begin end;                        // Implementation of first local task
task local_task2(x: integer);
begin end;                        // Implementation of second local task
var
  A: local_task1;
  B1: local_task2(10);            // parameters passed at declaration
  B2: local_task2(20);
begin
  // At the start of DoSomething, four processes run concurrently:
  // DoSomething, one copy of local_task1 (A) and two copies of local_task2
end; // DoSomething exits only after A, B1 and B2 all are done
The difference between a normal local procedure and a parallel task is
that tasks are started at declaration time. Up to this point, the
semantics is the same as in ADA. Passing parameters to a task is done at
declaration or by messages.
However, one can instead define a pointer to a task type, which is no
longer started at declaration time:
  type plocal_task2 = ^local_task2;
  var  pB: plocal_task2;
and then start a new task at any time by doing:
  New(pB);     // a new copy of local_task2 starts running here
  ...
  Dispose(pB); // wait for pB to terminate
In this way tasks can be made equivalent to local procedures/functions.
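For comparison, here is roughly how that New/Dispose pattern maps onto
what FPC's TThread already provides today. This is only a minimal sketch;
TWorker and its parameter x are invented for illustration:

program TaskSketch;
{$mode objfpc}
uses
  {$ifdef unix}cthreads,{$endif} Classes;

type
  TWorker = class(TThread)
  private
    FX: Integer;              // the parameter passed "at declaration"
  protected
    procedure Execute; override;
  public
    constructor Create(x: Integer);
  end;

constructor TWorker.Create(x: Integer);
begin
  FX := x;                    // store the parameter before the thread starts
  inherited Create(False);    // False: start running immediately, like New(pB)
end;

procedure TWorker.Execute;
begin
  // body of the task; FX is available here
end;

var
  pB: TWorker;
begin
  pB := TWorker.Create(10);   // the analogue of New(pB)
  // ... the main program continues concurrently ...
  pB.WaitFor;                 // wait for pB to terminate
  pB.Free;                    // the analogue of Dispose(pB)
end.

The point of the proposal is that all of this ceremony would collapse
into a local task declaration inside the enclosing routine.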
The idea is to encapsulate TThread. The advantage is that the following
programming paradigm becomes significantly easier to code:
We have a class with a method DoSomething. DoSomething executes a series
of complex (higher-level) parallel tasks. These tasks need input from
DoSomething and need access to its local variables (global with respect
to each task). Whatever the local tasks do, eventually DoSomething
patches the results together and returns to the caller.
The main advantage is that the task will have access not only to
DoSomething's local declarations but also to all the methods, fields and
properties of TSomeClass. With threads, the descendant of TThread which
implements the task needs a pointer to TSomeClass in order to have
access. This means additional work in constructors, etc., and it is not
clean. Relatively simple algorithms get spread across several methods,
which makes the code harder to follow.
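To make that concrete, here is a hedged sketch of the boilerplate the
current TThread approach forces on this paradigm (TPartialSum and the
summation example are invented; assume the same {$mode objfpc} /
cthreads / Classes scaffolding as in the sketch above). The task class
must carry a reference back to TSomeClass just to reach its data:

type
  TSomeClass = class;                 // forward declaration

  TPartialSum = class(TThread)
  private
    FOwner: TSomeClass;               // plumbing: pointer back to the owner
    FLo, FHi: Integer;
    FSum: Double;
  protected
    procedure Execute; override;
  public
    constructor Create(AOwner: TSomeClass; ALo, AHi: Integer);
    property Sum: Double read FSum;
  end;

  TSomeClass = class
  public
    Data: array of Double;
    function DoSomething: Double;
  end;

constructor TPartialSum.Create(AOwner: TSomeClass; ALo, AHi: Integer);
begin
  FOwner := AOwner;                   // the extra constructor work
  FLo := ALo;
  FHi := AHi;
  inherited Create(False);            // start running immediately
end;

procedure TPartialSum.Execute;
var
  i: Integer;
begin
  FSum := 0;
  for i := FLo to FHi do              // the owner's fields are reachable
    FSum := FSum + FOwner.Data[i];    // only through the stored pointer
end;

function TSomeClass.DoSomething: Double;
var
  A, B: TPartialSum;
begin
  A := TPartialSum.Create(Self, 0, High(Data) div 2);
  B := TPartialSum.Create(Self, High(Data) div 2 + 1, High(Data));
  A.WaitFor;                          // DoSomething returns only after
  B.WaitFor;                          // both tasks are done
  Result := A.Sum + B.Sum;            // patch the partial results together
  A.Free;
  B.Free;
end;

With the proposed local-task syntax, TPartialSum would disappear
entirely: the loop bodies would live inside DoSomething itself and read
Data directly.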
I will try to refresh my memory of ADA's parallel constructs further and
see if they could be useful.
I would also like to mention one language that is particularly good for
distributed heterogeneous networks (clouds, as they are called these
days):