[fpc-devel] Re [2]: Some more compiler problems

Peter Vreman peter at freepascal.org
Fri Apr 6 09:57:27 CEST 2001


> PV> From: Peter Vreman <peter at freepascal.org>
> >>  But as Peter already said, the heap manager is very optimized. We
> >>  got some patches a couple of months ago to make it slightly faster
> >>  (10% or so), but I don't think they have been applied yet.
> 
> PV> Because the patches made the heap manager exponentially slower. The
> PV> compiler cycle then took about 5 minutes instead of 1 minute.
> PV> It's always a speed versus size tradeoff...
>                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
> 
> Are you sure that smaller RAM usage always implies lower speed?
> What about running an application which needs 100 MB of RAM on a PC
> with 16 MB? And what about UNIX, which starts to kill applications
> in low memory conditions?

Of course a new, faster algorithm that uses less memory is the best solution. But in times
where RAM is cheap and all new PCs get at least 64MB, most already 128MB and gaming PCs
already 256MB, you have to ask why not use a little more memory to get more speed.

> About the patches: it is slower because it tries not to allocate a
> new block from the system if memory can be obtained by merging small blocks.
> But this can be improved: ideally we should use the memory limit value
> of the process (`man limits` on UNIX) to decide when to start such an
> allocation policy. If the process memory usage is lower than 2/3 of this limit
> then we can skip the merging of small blocks: that is what causes the slowdown.

I already commented on your heap patch in a private mail. I didn't say that everything you
did was wrong. The heap block allocation at the end of an OS memory block is a good thing,
and the deferred merging of blocks is a good approach as well. But there were some small
things, like the small block merging, that were taking too much time.
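
To make that concrete, a very rough sketch of the limit-based policy you describe above
could look like this. None of this is the real heap manager code: MemoryLimit,
CurrentHeapSize, AllocBlockFromOS and TryMergeSmallBlocks are placeholder names used only
for the illustration.

{ sketch only, not the real heap manager code }
{$mode objfpc}
program heapsketch;

const
  MemoryLimit = 64 * 1024 * 1024;   { placeholder; would really come from the process limit }

var
  CurrentHeapSize: Longint;         { bytes currently taken from the OS }

function AllocBlockFromOS(Size: Longint): Pointer;
begin
  { stands in for the existing "grow the heap from the OS" path }
  GetMem(Result, Size);
  Inc(CurrentHeapSize, Size);
end;

function TryMergeSmallBlocks(Size: Longint): Pointer;
begin
  { stands in for the merge-and-reuse path of the patch;
    returns nil when no merged block is big enough }
  Result := nil;
end;

function LimitAwareGetMem(Size: Longint): Pointer;
begin
  if CurrentHeapSize < (MemoryLimit div 3) * 2 then
    { below 2/3 of the limit: skip the expensive merging and take a
      fresh block, trading a bit of memory for allocation speed }
    Result := AllocBlockFromOS(Size)
  else
    begin
      { near the limit: first try to reuse memory by merging small
        free blocks, and only ask the OS for more when that fails }
      Result := TryMergeSmallBlocks(Size);
      if Result = nil then
        Result := AllocBlockFromOS(Size);
    end;
end;

var
  p: Pointer;
begin
  p := LimitAwareGetMem(1024);
  writeln('heap size after one allocation: ', CurrentHeapSize, ' bytes');
  FreeMem(p);
end.

Whether 2/3 is the right threshold would of course have to be measured.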

> 
> In that case we would work fast when process memory usage is
> low and _fast_ when process memory usage is high, because we continue
> to use RAM and do not start swapping.
> 
> If you add a routine to the runtime which returns such a limit value,
> then I will modify the patches...
> 
> Regards,
>     Sergey Korshunoff
> 
> PS: There is some strange memory usage in FPC: if I interrupt an FPC
> compilation with Ctrl-C and then start it again, I get the new FPC much
> faster (more than 10 times) than when I do not interrupt the FPC build.
> This is true in my case (16 MB of RAM). I get the result faster
> because after the interrupt (on the second start) FPC uses _much less_
> memory (less swap is used). But why does FPC keep everything in memory?
> Maybe it is not too hard to implement a policy like this:
>     -- we start the compilation of UNIT A
>     -- we find a dependency on UNIT B (in the `uses` or `implementation` part)
> X)  -- we free all memory allocated by the compilation of UNIT A and go
>        on to compile UNIT B
>     -- repeat
> 
> As far as I can tell, FPC currently does not perform step X...

That's true. FPC keeps the symtables of all loaded units. Unloading the units after each
unit compilation and then loading them again (including the units these units depend on)
is hard to implement (what should happen with a unit of which only the interface has been
compiled so far?) and very bad for performance. In the case of the compiler: for
compiler.pas all 100 units are loaded and unloaded, then pp.pas is compiled and that
requires 101 units to be loaded and unloaded again.
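
About the limit routine you asked for: I do not think anything like that is exported by
the RTL today, but a minimal sketch for a UNIX target could be based on getrlimit. The
example below assumes the FpGetRLimit call from the BaseUnix unit is available on your
tree (older trees would need a direct libc import instead), hard-codes the Linux value of
RLIMIT_DATA so it does not depend on the RTL exporting the constant, and
GetProcessMemoryLimit is only a name made up for this mail.

{ sketch only: query the soft data segment limit of the process }
{$mode objfpc}
program memlimit;

uses
  BaseUnix;

const
  RLIMIT_DATA = 2;   { Linux value of the data segment resource }

function GetProcessMemoryLimit: Int64;
var
  rl: TRLimit;
begin
  { fpgetrlimit returns 0 on success; rlim_cur is the soft limit }
  if FpGetRLimit(RLIMIT_DATA, @rl) = 0 then
    Result := Int64(rl.rlim_cur)
  else
    Result := -1;   { unknown: the caller can treat this as "no limit" }
end;

begin
  writeln('process data segment limit: ', GetProcessMemoryLimit, ' bytes');
end.

The heap manager could then read this value once at startup and compare the amount of
memory taken from the OS against 2/3 of it, as in the earlier sketch.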






