[fpc-devel] Compiler threads (was: NoGlobals branch)

Marco van de Voort marcov at stack.nl
Thu Aug 12 11:23:27 CEST 2010

In our previous episode, Hans-Peter Diettrich said:
> > So on start threads are created, and no new threads are created after?
> That's not a solution :-(
> *When* a parser becomes a thread, it can proceed only to the first 
> "uses" clause - then it has to create further threads, one for each used 
> unit, and has to wait for their completion. I.e. most threads are 
> waiting for other threads to complete[1].

No, the parser classes are. There is no reason to have them 1:1 with the
threads (except for that one threadvar). You can push the parser class onto
some completion stack until its requirements are satisfied. The next time a
thread finishes, the unit dependency tree is walked to find the next
compilable module. (Or a control thread could keep this kind of information
constantly updated, and e.g. prefetch files.)
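A rough sketch of that completion stack, in Python rather than FPC code (all
names invented for illustration): parked parser states remember which units
they still wait for, and whenever a worker finishes a unit, the scheduler
returns the parsers that have become compilable.

```python
# Illustrative sketch only, not FPC code: parser states are parked until
# every unit they use has been compiled; unit_finished() is called by the
# control thread each time a worker thread completes a unit.

class ParserState:
    def __init__(self, unit, uses):
        self.unit = unit       # the unit this parser was working on
        self.uses = set(uses)  # units it is still waiting for

class Scheduler:
    def __init__(self):
        self.parked = []  # parser states waiting on dependencies
        self.done = set()  # units compiled so far

    def park(self, state):
        self.parked.append(state)

    def unit_finished(self, unit):
        """Record a finished unit and return every parked parser
        whose dependencies are now all satisfied."""
        self.done.add(unit)
        ready, still_waiting = [], []
        for st in self.parked:
            if st.uses <= self.done:
                ready.append(st)
            else:
                still_waiting.append(st)
        self.parked = still_waiting
        return ready
```

The point is that the "walk the dependency tree" step is cheap bookkeeping in
the control thread; no parser needs to sit in a blocked OS thread while it
waits.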

It would save you the overhead of constantly creating and killing threads _and_
make it easier to control how many (possibly fully CPU-bound) threads run.

In a multiuser system (for nightly builds, the webservers and PHP interpreter
count as users!) you might not want to bring the system to its knees.
There is a flaw here, of course: the control thread can't really see whether
the worker threads are mostly blocked on I/O or really working. But users can
raise the thread count in the typical make -j way if they want full
utilization. (Though I think less raising will be needed, since the I/O cache
hit ratio should be higher when the compiler doesn't constantly restart.)
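To make the reuse idea concrete, here is a minimal fixed-size worker pool
sketch in Python (invented names, nothing FPC-specific): a bounded number of
long-lived threads pull compile jobs from a queue, so the pool size is the
single make -j style knob.

```python
# Sketch: N long-lived worker threads drain a job queue instead of
# spawning one thread per unit; worker_count is the -j style tunable.
import queue
import threading

def run_pool(jobs, worker_count, compile_unit):
    q = queue.Queue()
    for job in jobs:
        q.put(job)

    results = []
    lock = threading.Lock()  # protects the shared result list

    def worker():
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return  # queue drained, thread retires
            res = compile_unit(job)
            with lock:
                results.append(res)

    threads = [threading.Thread(target=worker) for _ in range(worker_count)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

In the real compiler the queue would be fed by the control thread as units
become compilable, rather than filled up front.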

> That behaviour makes it questionable, whether parsers should become 
> threads at all.


> Furthermore it suggests (to me) a dynamic thread priorization, based on
> the number of other threads waiting for their completion[2].  At least we
> have to distinguish between processing the interface part (or loading the
> declarations from the ppu file), until which point all dependent threads
> (using that unit) are blocked.  Once the interface information has become
> available, the dependent threads can continue parsing and generating
> object code (.o and .ppu files).

Well, keeping the monster fed will be the major issue. Compiling multiple
main modules with their own settings (to build packages/ in parallel without
make) would be good.

So a main module (for now assumed to be a packages/ build unit) has its own
settings (-Fu dirs etc.) associated with it, as well as a few global dirs.

Unfortunately, we can't just throw any unit from such builds into the
general unit cache (since e.g. the httpd-x packages have duplicate unit
names).

The easiest would be to flag a main module to be added to the global (ppu)
cache (and thus be persistent for the rest of the run, hopefully speeding up
the many dependencies on fcl-base and fcl-xml), or to keep a local cache for
leaf packages and/or packages with duplicate names, to be discarded after
completion of the module.
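That two-level lookup could look roughly like this (again an invented Python
sketch, not actual compiler code): a persistent global cache, shadowed by a
per-main-module local cache that is thrown away when the module completes.

```python
# Sketch: global persistent ppu cache plus a per-main-module local cache
# for packages with duplicate unit names; the local cache shadows the
# global one and is discarded when its main module is done.

class UnitCache:
    def __init__(self):
        self.global_cache = {}  # unit name -> ppu, lives for the whole run
        self.local = {}         # main module -> {unit name: ppu}

    def store(self, mainmodule, unit, ppu, persistent):
        if persistent:
            self.global_cache[unit] = ppu
        else:
            self.local.setdefault(mainmodule, {})[unit] = ppu

    def lookup(self, mainmodule, unit):
        # Local entries win, so duplicate unit names (e.g. between two
        # httpd versions) resolve to the copy built for this main module.
        local = self.local.get(mainmodule, {})
        if unit in local:
            return local[unit]
        return self.global_cache.get(unit)

    def module_done(self, mainmodule):
        self.local.pop(mainmodule, None)  # discard the local cache
```

Whether a unit goes in persistently would be driven by the flag on the main
module described above.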

But now I'm rewriting the architecture on a blank sheet, of course, something
that is always dangerous.

> Brainstorming: What other models come into mind, for using threads in 
> the compiler?

I wouldn't go overboard. Having a few threads working is enough; the rest of
the effort had better go into the structures and mechanisms that allow them
to be fed, and that keep the compiler running for a long time.

> and resume threads before they have finished? With a fixed number of 
> threads we IMO risk a stall of the entire compiler, as soon as each of 
> these threads is blocked for *other* reasons (memory management, disk 
> I/O...).

That can be mitigated by starting twice (or one and a half times) as many
threads as there are physical cores, as make -j typically does in such cases,
if blocking is really a possibility that adds a noticeable overall delay.

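The oversubscription rule of thumb is a one-liner; a hedged Python sketch
(the 1.5x default is just the convention mentioned above, not a measured
value):

```python
# Sketch: oversubscribe the worker pool to cover I/O-blocked threads,
# make -j style; 1.5-2x the core count is the usual rule of thumb.
import math
import os

def default_worker_count(factor=1.5):
    cores = os.cpu_count() or 1  # os.cpu_count() may return None
    return max(1, math.ceil(cores * factor))
```
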
> [2] How portable is a dynamic thread priorization?

You don't. Your control thread determines which module is compiled or resumed
next when a worker thread becomes available. There is no reason to do this
via funky thread-priority systems.
