[fpc-devel] Russian locale information not compatible with FPC locale variables

Daniël Mantione daniel.mantione at freepascal.org
Wed Jul 30 22:52:21 CEST 2008



On Wed, 30 Jul 2008, Boian Mitov wrote:

>   Hi Joost,
>
> Actually the trend started probably ~10-15 years ago with the DSP 
> processors. Then along came Transmeta and the Itanium, then there were 
> the GPUs, including those from NVidia, and there is the PlayStation 3. 
> They all use this type of approach. The first massive multicore I am 
> aware of is the new SPARC, the 
> http://en.wikipedia.org/wiki/UltraSPARC_T1 and 
> http://en.wikipedia.org/wiki/UltraSPARC_T2 from Sun. Intel is actually 
> playing a bit of a catch-up game. Shared memory, however, has worked 
> perfectly for the DSPs for many years, as it has for the Itanium, the 
> Crusoe, the GPUs, and the PlayStation. The future is not likely to be in 
> faster systems, but in more cores. This seems to be the consensus lately 
> among processor architects. Intel already demonstrated a 100+ core 
> processor last year. This year we expect the first 16-core processors 
> (8 HT cores) to hit the market, and the direction is very clear. Any 
> compiler vendor, or developer, should at least be paying attention ;-) .

It is extremely important to note that the number of cores cannot scale 
endlessly. The limits of SMP-based systems were discovered in the HPC 
world more than 10 years ago: 64 or 128 cores seems to be pretty much the 
limit, and few machines with more cores have been produced.

The reason is that the cache coherency algorithms don't scale: you get 
too much traffic between processors. This is already visible with 
eight-socket Opteron systems today; their performance is disappointing, 
despite the very good NUMA-style design of the Opteron processor.
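
The effect is easy to provoke even on a small machine. The sketch below 
(my own illustration, not anything from this thread) lets two threads 
hammer on counters that share a cache line, and then on counters padded 
onto separate lines; the coherency traffic alone makes the first run 
noticeably slower. The TWorker class and the assumed 64-byte line size 
are inventions for the example:

program CoherencyDemo;
{$mode objfpc}
uses
  {$ifdef unix}cthreads,{$endif}
  Classes, SysUtils, DateUtils;

const
  Iterations = 50000000;

type
  { One counter padded out to an assumed 64-byte cache line. }
  TPaddedCounter = record
    Value: Int64;
    Pad: array[0..55] of Byte;
  end;

  TWorker = class(TThread)
  private
    FCounter: PInt64;
  public
    constructor Create(ACounter: PInt64);
    procedure Execute; override;
  end;

constructor TWorker.Create(ACounter: PInt64);
begin
  FCounter := ACounter;
  inherited Create(False);  { start running immediately }
end;

procedure TWorker.Execute;
var
  i: Integer;
begin
  { When both counters sit on one cache line, every increment here
    forces the line out of the other core's cache. }
  for i := 1 to Iterations do
    Inc(FCounter^);
end;

procedure Run(A, B: PInt64; const Desc: string);
var
  T1, T2: TWorker;
  Start: TDateTime;
begin
  Start := Now;
  T1 := TWorker.Create(A);
  T2 := TWorker.Create(B);
  T1.WaitFor;
  T2.WaitFor;
  T1.Free;
  T2.Free;
  WriteLn(Desc, ': ', MilliSecondsBetween(Now, Start), ' ms');
end;

var
  SameLine: array[0..1] of Int64;        { adjacent: share a cache line }
  Apart: array[0..1] of TPaddedCounter;  { padded: one line each }
begin
  Run(@SameLine[0], @SameLine[1], 'same cache line');
  Run(@Apart[0].Value, @Apart[1].Value, 'separate lines ');
end.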

In fact, we are getting close to the limits. With AMD's announcement of 
the 12-core Magny-Cours processor, we would have 96 cores in an 8-socket 
machine. I have yet to see this happen, and I am sceptical it will 
perform well. Quad socket may well be the limit for this processor.

The HPC world therefore moved from SMP systems to clusters, i.e. 
distributed-memory systems. Multi-threading was replaced by message 
passing, and that is where we are today: Roadrunner, the first 
1-petaflops computer, has been built.
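
For readers who have not written message-passing code: the point is that 
workers share no data and communicate only by sending values to each 
other. In HPC this is done with MPI across the interconnect; the sketch 
below merely mimics the idiom between two Free Pascal threads using the 
RTL event primitives. The TMailbox record and the Send/Receive helpers 
are invented names for the illustration:

program MessagePassing;
{$mode objfpc}
uses
  {$ifdef unix}cthreads,{$endif}
  Classes, SysUtils;

type
  { A one-slot mailbox: the slot is the only shared state, and the
    two events strictly serialize all access to it. }
  TMailbox = record
    Slot: Integer;
    Full: PRTLEvent;   { signalled when a message is waiting }
    Empty: PRTLEvent;  { signalled when the slot is free }
  end;

var
  Box: TMailbox;

procedure Send(var M: TMailbox; Msg: Integer);
begin
  RTLEventWaitFor(M.Empty);  { wait until the last message was taken }
  M.Slot := Msg;
  RTLEventSetEvent(M.Full);
end;

function Receive(var M: TMailbox): Integer;
begin
  RTLEventWaitFor(M.Full);
  Result := M.Slot;
  RTLEventSetEvent(M.Empty);
end;

type
  TConsumer = class(TThread)
  public
    procedure Execute; override;
  end;

procedure TConsumer.Execute;
var
  Msg: Integer;
begin
  repeat
    Msg := Receive(Box);
    if Msg >= 0 then
      WriteLn('received ', Msg);
  until Msg < 0;  { -1 acts as the stop message }
end;

var
  Consumer: TConsumer;
  i: Integer;
begin
  Box.Full := RTLEventCreate;
  Box.Empty := RTLEventCreate;
  RTLEventSetEvent(Box.Empty);  { the slot starts out free }

  Consumer := TConsumer.Create(False);
  for i := 1 to 5 do
    Send(Box, i * i);
  Send(Box, -1);
  Consumer.WaitFor;
  Consumer.Free;
  RTLEventDestroy(Box.Full);
  RTLEventDestroy(Box.Empty);
end.

Because all shared state is confined to the mailbox, the same program 
structure survives a move to a real distributed-memory system: only Send 
and Receive need to be reimplemented on top of the network.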

The end of message passing has now been announced as well; it is fairly 
certain we won't be able to reach exaflops with message passing alone. 
Still, this is not very relevant for us desktop users.

What would be interesting for us is what will happen on the desktop. 
What will happen if we can afford 100+ cores in our desktops? Will our 
desktops become clusters with an interconnect network?

Now, of course, many people are still coding single-threaded 
applications. Parallelizing is a necessity for the future; a minimal 
starting point is sketched below. But know that the multi-core race 
won't last as long as the megahertz race, whatever CPU manufacturers 
tell you about 100+ core processors. We may already be about halfway at 
the moment.
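
Here is the classic first parallelization in Free Pascal: splitting a 
loop over a number of TThread workers. The names TSumThread and 
NumThreads are mine; nothing here comes from the discussion above:

program ParallelSum;
{$mode objfpc}
uses
  {$ifdef unix}cthreads,{$endif}
  Classes, SysUtils;

const
  N = 10000000;     { chosen divisible by NumThreads for simplicity }
  NumThreads = 4;

type
  TSumThread = class(TThread)
  private
    FLo, FHi: Integer;
  public
    Sum: Int64;     { each thread writes only its own result }
    constructor Create(ALo, AHi: Integer);
    procedure Execute; override;
  end;

var
  Data: array of Integer;

constructor TSumThread.Create(ALo, AHi: Integer);
begin
  FLo := ALo;
  FHi := AHi;
  inherited Create(False);
end;

procedure TSumThread.Execute;
var
  i: Integer;
begin
  Sum := 0;
  for i := FLo to FHi do   { threads only read the shared array }
    Inc(Sum, Data[i]);
end;

var
  Threads: array[0..NumThreads - 1] of TSumThread;
  i, Chunk: Integer;
  Total: Int64;
begin
  SetLength(Data, N);
  for i := 0 to N - 1 do
    Data[i] := i mod 100;

  Chunk := N div NumThreads;
  for i := 0 to NumThreads - 1 do
    Threads[i] := TSumThread.Create(i * Chunk, (i + 1) * Chunk - 1);

  Total := 0;
  for i := 0 to NumThreads - 1 do
  begin
    Threads[i].WaitFor;       { join, then combine the partial sums }
    Inc(Total, Threads[i].Sum);
    Threads[i].Free;
  end;
  WriteLn('total = ', Total);
end.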

Daniël

