[fpc-devel] Compiler bottlenecks

Jonas Maebe jonas.maebe at elis.ugent.be
Thu Jul 15 09:55:52 CEST 2010


Florian Klaempfl wrote on Thu, 15 Jul 2010:

> I wonder if zeroing memory blocks (so that when we allocate them we
> already know they contain zeros) and preparing new register
> allocators in a helper thread could improve this.

Possibly, yes. Most OSes already zero pages in a background thread for
exactly the same reason.
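
A compiler-side variant of that could be a small pool of pre-zeroed
blocks kept topped up by a helper thread. Very roughly (a sketch only;
the names, sizes and the polling back-off are made up, and a real
version would block on an event rather than sleep):

program ZeroPoolSketch;

{$mode objfpc}

uses
  {$ifdef unix}cthreads,{$endif}
  SysUtils, SyncObjs;

const
  BlockSize = 64 * 1024; { made-up block size }
  PoolDepth = 16;        { made-up pool depth }

var
  Lock: TCriticalSection;
  Blocks: array[0..PoolDepth - 1] of Pointer;
  Count: Integer = 0;
  Stop: Boolean = False;

{ helper thread: keeps the pool topped up with zeroed blocks }
function ZeroHelper(Param: Pointer): PtrInt;
var
  p: Pointer;
begin
  Result := 0;
  while not Stop do
  begin
    GetMem(p, BlockSize);
    FillChar(p^, BlockSize, 0); { zero outside the lock }
    Lock.Acquire;
    if Count < PoolDepth then
    begin
      Blocks[Count] := p;
      Inc(Count);
      p := nil;
    end;
    Lock.Release;
    if p <> nil then
    begin
      FreeMem(p); { pool already full }
      Sleep(1);   { crude back-off }
    end;
  end;
end;

{ allocator side: prefer a pre-zeroed block, else fall back to
  AllocMem, which also returns zeroed memory }
function GetZeroedBlock: Pointer;
begin
  Result := nil;
  Lock.Acquire;
  if Count > 0 then
  begin
    Dec(Count);
    Result := Blocks[Count];
  end;
  Lock.Release;
  if Result = nil then
    Result := AllocMem(BlockSize);
end;

var
  Helper: TThreadID;
  p: Pointer;
begin
  Lock := TCriticalSection.Create;
  Helper := BeginThread(@ZeroHelper);
  p := GetZeroedBlock;
  FreeMem(p);
  Stop := True;
  WaitForThreadTerminate(Helper, 0);
  while Count > 0 do
  begin
    Dec(Count);
    FreeMem(Blocks[Count]);
  end;
  Lock.Free;
end.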

I've also tried adding simple pooling systems to reduce
allocation/freeing time (a bit like mark/release), but the problem is
that in many cases class instances can be either local or global
(e.g., tai* instances can be added either to some global stubs section
or to a procedure's code, and parse tree nodes can become part of
trees saved for inlining).
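
The kind of mark/release pool I mean looks roughly like this (a sketch
only; TInstancePool and its interface are invented for this example).
The comment in Release shows exactly where the local-vs-global problem
bites:

program PoolSketch;

{$mode objfpc}

type
  { a bump allocator with mark/release; no alignment, overflow or
    per-object free handling in this sketch }
  TInstancePool = class
  private
    FArena: array of Byte;
    FTop: PtrUInt;
  public
    constructor Create(ASize: PtrUInt);
    function Alloc(ASize: PtrUInt): Pointer;
    function Mark: PtrUInt;
    procedure Release(AMark: PtrUInt);
  end;

constructor TInstancePool.Create(ASize: PtrUInt);
begin
  SetLength(FArena, ASize);
  FTop := 0;
end;

function TInstancePool.Alloc(ASize: PtrUInt): Pointer;
begin
  Result := @FArena[FTop];
  Inc(FTop, ASize);
end;

function TInstancePool.Mark: PtrUInt;
begin
  Result := FTop;
end;

procedure TInstancePool.Release(AMark: PtrUInt);
begin
  { throws away everything allocated since Mark in one step; only
    correct if none of those instances has been linked into a
    longer-lived (global) structure in the meantime }
  FTop := AMark;
end;

var
  Pool: TInstancePool;
  M: PtrUInt;
  p: Pointer;
begin
  Pool := TInstancePool.Create(4096);
  M := Pool.Mark;
  p := Pool.Alloc(128); { e.g. an instance local to one procedure }
  Pool.Release(M);      { frees it, and anything after the mark, at once }
  Pool.Free;
end.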

We could also parallelise writing out the assembler code for the  
external assembler, and possibly also some other list processing.

E.g., for N threads, start by simply walking over the list of
instructions, storing a pointer to the first element and then to every
(list.count/N)th element (rounded up or down to the start of a new
source line in the case of the assembler writer). Then fire off the N
threads to process the list starting at those points, letting each
store its output in a temporary buffer. At the end, write the buffers
out in the correct order.
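
Concretely, the split-and-merge could look something like this, using
a TStringList as a stand-in for the real tai list and a trivial
per-element "formatting" step (all names here are illustrative, none
of this is actual compiler code):

program ParallelWriter;

{$mode objfpc}

uses
  {$ifdef unix}cthreads,{$endif}
  Classes, SysUtils;

type
  { formats one slice of the instruction list into a private buffer }
  TWriterThread = class(TThread)
  private
    FItems: TStrings;         { shared, read-only while threads run }
    FFirst, FLast: Integer;   { inclusive slice bounds }
    FOutput: TStringList;     { per-thread temporary buffer }
  protected
    procedure Execute; override;
  public
    constructor Create(AItems: TStrings; AFirst, ALast: Integer);
    destructor Destroy; override;
    property Output: TStringList read FOutput;
  end;

constructor TWriterThread.Create(AItems: TStrings; AFirst, ALast: Integer);
begin
  inherited Create(True); { create suspended }
  FItems := AItems;
  FFirst := AFirst;
  FLast := ALast;
  FOutput := TStringList.Create;
end;

destructor TWriterThread.Destroy;
begin
  FOutput.Free;
  inherited Destroy;
end;

procedure TWriterThread.Execute;
var
  i: Integer;
begin
  { each thread reads only its own slice and writes only its own
    buffer, so no locking is needed }
  for i := FFirst to FLast do
    FOutput.Add(#9 + FItems[i]); { stand-in for the real formatting }
end;

const
  N = 4;
var
  Items: TStringList;
  Threads: array[0..N - 1] of TWriterThread;
  ChunkSize, i, First, Last: Integer;
begin
  Items := TStringList.Create;
  for i := 1 to 100 do
    Items.Add('# instruction ' + IntToStr(i));

  { the real writer would round these boundaries to the start of a
    new source line, as described above }
  ChunkSize := (Items.Count + N - 1) div N;
  for i := 0 to N - 1 do
  begin
    First := i * ChunkSize;
    Last := First + ChunkSize - 1;
    if Last >= Items.Count then
      Last := Items.Count - 1;
    Threads[i] := TWriterThread.Create(Items, First, Last);
    Threads[i].Start;
  end;

  { wait and write the buffers out in their original order }
  for i := 0 to N - 1 do
  begin
    Threads[i].WaitFor;
    Write(Threads[i].Output.Text);
    Threads[i].Free;
  end;
  Items.Free;
end.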

There are currently some global dependencies (e.g., Darwin DWARF label
numbers that are generated on the fly, and the current section type,
which is tracked for optimisation reasons), but it shouldn't be very
difficult to resolve them; one possible approach for the label numbers
is sketched below. The same technique can probably also be used to
parallelise at least parts of the internal assembler.
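
For the label numbers, one option is to let each thread reserve blocks
of numbers from the shared counter with an atomic add, so generated
labels stay globally unique without taking a lock per label. A sketch
(the allocator record and block size are invented for this example):

program LabelRanges;

{$mode objfpc}

const
  LabelBlock = 1024; { labels reserved per grab; a made-up tuning value }

var
  NextLabelNr: LongInt = 0; { the shared counter }

type
  { one per worker thread }
  TLabelAllocator = record
    Next, Limit: LongInt;
  end;

function AllocLabel(var A: TLabelAllocator): LongInt;
begin
  if A.Next >= A.Limit then
  begin
    { only the block reservation touches shared state, atomically }
    A.Next := InterLockedExchangeAdd(NextLabelNr, LabelBlock);
    A.Limit := A.Next + LabelBlock;
  end;
  Result := A.Next;
  Inc(A.Next);
end;

var
  A: TLabelAllocator;
begin
  A.Next := 0;
  A.Limit := 0; { force a block reservation on first use }
  WriteLn('L', AllocLabel(A)); { L0 }
  WriteLn('L', AllocLabel(A)); { L1 }
end.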

Especially when using DWARF, which causes a lot of tai constants to be
generated, this could make a significant difference. And since the
lists keep track of their number of elements, we can easily define a
threshold to decide whether to parallelise at all and, if so, at most
how many threads to use.
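
The heuristic itself could be as simple as the following (the
threshold constant is a placeholder that would have to be tuned by
measurement):

program ThreadCountSketch;

{$mode objfpc}

{ maps the element count the list already tracks onto a thread count }
function WriterThreadCount(ElementCount, MaxThreads: Integer): Integer;
const
  MinElementsPerThread = 10000; { hypothetical threshold }
begin
  Result := ElementCount div MinElementsPerThread;
  if Result < 1 then
    Result := 1; { below the threshold: stay single-threaded }
  if Result > MaxThreads then
    Result := MaxThreads;
end;

begin
  WriteLn(WriterThreadCount(5000, 8));   { 1: not worth parallelising }
  WriteLn(WriterThreadCount(50000, 8));  { 5 }
  WriteLn(WriterThreadCount(500000, 8)); { 8: capped }
end.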


Jonas
