[fpc-devel] FPC/Lazarus Rebuild performance

Juha Manninen (gmail) juha.manninen62 at gmail.com
Sat Sep 11 12:25:14 CEST 2010

On Saturday 11 September 2010 09:55:14 Martin Schreiber wrote:
> On Friday 10 September 2010 17:43:59 Adem wrote:
> > Some time ago, there was a brief mention that multi-threading FPC would
> > be counterproductive because the compilation process is mostly disk-IO
> > bound -- this is what I understood, anyway.
> > 
> > I wanted to check whether disk IO was really limiting FPC/Lazarus
> > compile performance.
> It is interesting that Delphi 7 compiles about 10 times faster than FPC on
> the same machine.
> http://www.mail-archive.com/fpc-devel%40lists.freepascal.org/msg08029.html
> Results with more code and FPC 2.4:
> http://thread.gmane.org/gmane.comp.ide.mseide.user/18797
> One would think Delphi and FPC need the same disk IO?

I read the threads. My guess is also that the slowness comes from searching 
for and writing many files in big directory structures. That is slow even when 
the files are cached. Starting a new process is slow, too.
These OS kernel tasks are difficult to measure, and process monitors don't 
give reliable results.

My suggestion: create an API for integrating FPC with IDEs and special "make" 
programs. The API would pass exact file names and locations, and it could 
also pass whole source files as in-memory buffers.

Then build FPC as a dynamic shared library. There would be two FPC binaries 
then: the traditional executable, and a shared library to be called from 
external programs.

For example, the Lazarus IDE already collects a lot of information about a 
project's files and directories. That info could be "easily" passed to the 
compiler. CodeTools in Lazarus already parses lots of code; the whole parsed 
interface section (symbol table and whatnot) could be passed to the compiler, 
too.
... but that is a later step, let's stick with file info for now ...

Then there would be a new dedicated build program which reads all the project 
info first and then calls the compile function in the shared library (not a 
separate process) for each source file.

No expensive process startups, and no searching huge directory structures for 
include files again for every source file.
I bet it would make a BIG difference in speed.
Delphi must be doing something like this, although I don't know the details.
After that, it would make sense to make the compiler multi-threaded. It could 
scale almost linearly with CPU cores (maybe).

I haven't seen such ideas on these mailing lists. Is it possible I am the first 
one to have them? I don't believe so, because the idea is so obvious.
If such development is already happening with the new make tools, then sorry 
for my ignorance.


More information about the fpc-devel mailing list