[fpc-devel] Parallel Computing
daniel.mantione at freepascal.org
Tue Dec 11 12:13:35 CET 2007
Op Tue, 11 Dec 2007, schreef Florian Klaempfl:
> Mattias Gärtner schrieb:
> > Zitat von Michael Schnell <mschnell at lumino.de>:
> >>> Think about the alternative: It is much harder to implement the same
> >>> parallel loop with TThread. So OpenMP makes parallel loops much easier
> >>> to implement. For me this is the 'Delphi' way: it makes things easy and
> >>> readable.
> >> Of course you are right. In the example of "parallel" loops it's _a_lot_
> >> easier to use for the programmer. "Lightweight-threaded" stuff like
> >> parallel loops was not the original aim of ThreadEvents. The original
> >> target was a more "standard" use of threads. But it _can_ be used for
> >> parallel loops, too, and it follows the "Delphi-language-paradigms" much
> >> more closely than using TThread.
> >> I don't suppose that anybody will start implementing real parallel loops
> >> like suggested on the wiki page any time soon.
> > The examples are very artificial to demonstrate the problems.
> > Some real world examples / tutorials should be added on a new page. Parallel
> > algorithms are seldom taught in books/schools/universities, so programmers
> > are not used to them. This is slowly improving.
> For me the whole OpenMP approach is really artificial; honestly, I don't
> see a real use for it in real-world code.
It is common in the scientistware I benchmark daily. The evil geniuses
parallelize their applications by placing some OpenMP hints in their code.
However, MPI is the de facto standard for parallelization and is by far the
most widely used. Some people believe OpenMP is more efficient than MPI
(reading another thread's data directly is more efficient than sending
messages between processes). However, modern MPI implementations use shared
memory and RDMA between processes to counter this; I have seen little
practical evidence that OpenMP is faster than MPI.