[fpc-devel] The future of fpmake

Hans-Peter Diettrich DrDiettrich1 at aol.com
Fri Apr 1 12:31:18 CEST 2011


Marco van de Voort wrote:

>> In a new approach I'd provide the interpretation of existing MakeFiles,
>> and extend it to the specific needs and capabilities of the FPC/Lazarus
>> project (package...) model.
> 
> Personally I don't think the core makefile principles are worth preserving
> at all:

ACK. A build system based on the FPC/Lazarus project management and on 
Pascal "uses" clauses can operate at a higher level (comparable to 
automake source files rather than final makefiles).
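
As an illustration of that level: a minimal fpmake script already 
describes a package in Pascal rather than in makefile syntax. This is 
only a sketch; the package and unit names are invented.

  program fpmake;

  { Minimal fpmake script: the package is described in Pascal, roughly
    at the level of an automake source file. The package and unit
    names below are invented for illustration. }

  uses fpmkunit;

  var
    P: TPackage;
  begin
    with Installer do
    begin
      P := AddPackage('examplepkg');        // hypothetical package name
      P.Version := '0.1.0';
      P.Targets.AddUnit('exampleunit.pp');  // hypothetical unit
      Run;                                  // fpmkunit performs the actual build
    end;
  end.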

> 1. they are mostly generated anyway. The .fpc files are the info, not the
>    makefiles themselves.
[that's the automake level mentioned above]

> 2. they totally bypass certain core FPC principles (like being able to
>    compile multiple files in one go. In the packages this is worked around
>    using buildunits)
> 3. For e.g. a multithreaded compiler, the number of files to be handed to
>    the compiler should increase, not decrease (*)
> 
> (*) there are multiple issues here to fix:
>   - compiler is not multithreaded.

After my experiments I'm not sure whether parallel compilation will 
really speed things up. In most cases all modules but one are in a 
"suspended" state while their "uses" clauses are evaluated by the compiler.

When a (Lazarus) project uses packages, all required packages can be 
compiled in parallel. This doesn't require a multithreaded compiler, as 
every package can be compiled by a separate compiler process.
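
A rough sketch of that idea, assuming one source tree per package (the 
directories, main files and compiler options are invented):

  program parbuild;

  { Sketch: compile several packages in parallel by starting one fpc
    process per package, without a multithreaded compiler. The package
    directories, main files and options are invented for illustration. }

  uses
    Classes, SysUtils, Process;

  const
    Packages: array[0..2] of string = ('pkg/a', 'pkg/b', 'pkg/c');

  var
    Procs: array[0..2] of TProcess;
    i: Integer;
  begin
    for i := Low(Packages) to High(Packages) do
    begin
      Procs[i] := TProcess.Create(nil);
      Procs[i].Executable := 'fpc';
      Procs[i].Parameters.Add('-B');                      // rebuild all units of the package
      Procs[i].Parameters.Add(Packages[i] + '/main.pp');  // hypothetical main file
      Procs[i].Execute;                                    // returns immediately, runs concurrently
    end;
    { wait for all compiler processes to finish }
    for i := Low(Procs) to High(Procs) do
    begin
      Procs[i].WaitOnExit;
      Procs[i].Free;
    end;
  end.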

>   - module system is scheduled for rewrite for about a decade now

Does a roadmap or wishlist of the intended changes exist?

>   - packaging (also static, iow DCP) could decrease the number of files
>     and speed up the compiler. But what about backend tools?

A package must be recompiled as a whole if even one of its units has to 
be recompiled. Disk I/O (file cache) usage could be reduced by a map 
file that records the options of all compiled units. But since used 
units must be loaded anyway, even when they need no recompilation, the 
benefit of such a map is questionable. Still, it could be given a try 
in a renewed module system.
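
Just to make the idea concrete, a sketch of such a check; the map format 
(one "unit=options;timestamp" line per unit) and the recompile rule are 
invented here:

  program unitmap;

  { Sketch: read a per-package map file with one line per compiled unit,
    "unitname=options;source-timestamp", and report which units would
    need recompilation. File name, format and rule are invented. }

  uses
    Classes, SysUtils;

  function NeedsRecompile(const Line, CurrentOpts: string): Boolean;
  var
    UnitName, Rest, Opts, Stamp: string;
    p: Integer;
  begin
    p := Pos('=', Line);
    UnitName := Copy(Line, 1, p - 1);
    Rest := Copy(Line, p + 1, MaxInt);
    p := Pos(';', Rest);
    Opts := Copy(Rest, 1, p - 1);
    Stamp := Copy(Rest, p + 1, MaxInt);
    { recompile if the options changed or the source is newer than recorded }
    Result := (Opts <> CurrentOpts) or
              (IntToStr(FileAge(UnitName + '.pp')) <> Stamp);
  end;

  var
    Map: TStringList;
    i: Integer;
  begin
    Map := TStringList.Create;
    try
      Map.LoadFromFile('units.map');      // hypothetical map file
      for i := 0 to Map.Count - 1 do
        if NeedsRecompile(Map[i], '-O2') then
          Writeln('recompile: ', Map[i]);
    finally
      Map.Free;
    end;
  end.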


>> WRT time consuming jobs, observed e.g. in building FPC itself, I'd
>> integrate "clean"ing of the output directories, together with means to
>> reduce the number of such directories, as far as possible. I'd also add
>> FPC as an integrated module, with a reusable file cache to prevent
>> excessive directory scans and loading of configs and modules.
> 
> This only works when integrated with FPC, as the compiler finds its own
> files. But that is all far, far away. The point is more that the current
> makeshift transition situation (fpmake) should be adaptable in the
> future.

I don't know about platform restrictions, but a shared file pool looks 
feasible to me. Perhaps fpmake (or Lazarus) could hold a module map, or 
even entire modules, in a memory-mapped file (MMF), shareable by all 
compiler instances or processes?
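
On Unix-like targets something along these lines would be possible (a 
sketch only: the file name, size and map contents are invented, and 
Windows would use CreateFileMapping instead of mmap):

  program sharedmap;

  { Sketch: expose a small module map to several processes through a
    memory-mapped file. Unix-only (BaseUnix); the file name, size and
    map contents are invented for illustration. }

  uses
    BaseUnix, ctypes, SysUtils;

  const
    MapSize = 4096;

  var
    fd: cint;
    p: Pointer;
  begin
    fd := FpOpen('/tmp/fpc-module-map', O_RDWR or O_CREAT, &644);
    if fd < 0 then
      Halt(1);
    FpFtruncate(fd, MapSize);
    { MAP_SHARED makes the mapping visible to every process mapping the file }
    p := FpMmap(nil, MapSize, PROT_READ or PROT_WRITE, MAP_SHARED, fd, 0);
    if p = Pointer(-1) then
      Halt(1);
    StrPCopy(PChar(p), 'exampleunit=compiled');  // hypothetical map entry
    FpMunmap(p, MapSize);
    FpClose(fd);
  end.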

> At least currently its data is fairly abstract (though I'm not entirely
> happy having to specify fileinfo manually. That should be based on compiler
> feedback. But this is very difficult in practice since the exact build
> process is OS dependent)
> 
> One big step of fpmake is the killing of tools outside the FPC project like
> make.

ACK!

>> The compiler module can be used in the Lazarus IDE as well, either as a
>> built-in module or as a shared library, with the aforementioned ability to
>> share the module cache with the application. Unfortunately parallel
>> compilation will never become available with FPC, so other chances
>> for parallel processing should be explored. As long as the build tasks
>> are based on input/output files (disk I/O), a shared file cache looks
>> like the most promising way of speeding up a build process.
> 
> Such plans are on the decade scale. If you are interested in it, I'd start
> creating as many internal assemblers and linkers as possible and further the
> module-rewrite, and the organizing of units into larger concepts (packages
> or whatever) and reducing the proliferation of files.

My experience with porting tools is that only the interface should be 
modified: from reading command-line arguments to a straightforward 
configuration of the internal variables. The worker code should stay as 
is, so that bugfixes and other improvements require no more than a 
recompilation of the updated sources.
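
In other words, something along these lines (all names are invented): 
the worker is driven by a plain options record, and the command-line 
front end merely fills it in.

  program toolport;

  { Sketch of the suggested split: the worker logic is driven by a plain
    options record, so an IDE or fpmake can configure it directly, while
    the command-line front end only parses arguments into the record.
    All names are invented for illustration. }

  type
    TToolOptions = record
      InputFile: string;
      OutputDir: string;
      Verbose: Boolean;
    end;

  procedure RunWorker(const Opts: TToolOptions);
  begin
    { the actual tool logic; identical for command-line and embedded use }
    if Opts.Verbose then
      Writeln('processing ', Opts.InputFile, ' -> ', Opts.OutputDir);
  end;

  procedure RunFromCommandLine;
  var
    Opts: TToolOptions;
  begin
    { the only porting work: map the arguments onto the record }
    Opts.InputFile := ParamStr(1);
    Opts.OutputDir := ParamStr(2);
    Opts.Verbose := ParamStr(3) = '-v';
    RunWorker(Opts);
  end;

  begin
    RunFromCommandLine;
  end.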

Look, for example, at Abbrevia vs. 7zip. Abbrevia never catches up with 
even the maintenance of a few archive formats, while 7zip (as a shell) 
can easily be extended by merely adding an interface for every new 
compression module.

The same concept could be applied to fpc, so that it can be integrated 
tightly into fpmake - at least for non-cross-platform builds.
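
Purely as a sketch of the shape such an integration might take (the 
interface is entirely hypothetical; nothing like it exists in FPC 
today):

  unit compintf;

  { Hypothetical sketch: the compiler behind a small interface, so that
    fpmake or an IDE could drive it in-process instead of spawning the
    fpc executable. No such interface exists in FPC today. }

  interface

  type
    ICompilerModule = interface
      ['{5B7A1C2E-3D4F-4A6B-9C8D-0E1F2A3B4C5D}']
      procedure SetOption(const Name, Value: string);
      function CompileUnit(const FileName: string): Boolean;
    end;

  implementation

  end.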

>> When fpmake consists of dedicated modules for various tasks, these
>> modules can be used to build other applications, like test suite runners 
>> or profile editors.
> 
> I think the core bit of fpmake/fppkg should be 
> 
> (1) killing off external tools, and the compromises to package metadata that they
>     force upon us.
> (2) start developing a logical package concept, both from a building and a
>    packaging perspective. Maybe with a file based representation (.lib/.dcp like)
> (3) provide some hooks for packages outside the current main build tree to
>    integrate with release engineering.
> 
> Both 1 and 2 as a start to more involved systems. When future compiler
> development implements some in-compiler package/library concept or unit
> cache, it will be easier to roll it out.
> 
> Point 3 is because there is an enormous rift between package in the tree,
> and outside of it. By cutting down the administration for this, we might get
> a more workable external package management.

I don't understand the packages issue right now. What's the mentioned 
"tree"?

DoDi

