[fpc-devel] Discussion about "Dynamic packages"

Sven Barth pascaldragon at googlemail.com
Thu Apr 13 22:27:06 CEST 2017

On 13.04.2017 20:36, Bishop wrote:
> 04/13/17 10:47:54, Michael Van Canneyt <michael at freepascal.org>:
>> Dynamic Packages will in any case be optional; they will not be
>> mandatory.
> The main question is a bit different: will the performance penalties
> from Dynamic Packages be optional? Let me give an example. Suppose we
> want to check whether a class instance is a descendant of some class,
> so we call TObject.InheritsFrom. But if we use indirect tables in the
> VMT, we need 2 DEPENDENT memory reads (meaning we are bound by memory
> latency, not memory bandwidth) to reach the parent VMT. The same goes
> for ThreadVars. Normally their addresses within the TLS block can be
> allocated by the linker, so we don't need an additional memory read
> (which also depends on the CPU cache) when we access a ThreadVar
> variable.

No, those changes are not optional, though in those cases where the
compiler can avoid them (e.g. inside the same unit) it already tries to.
And for global variables you can use {$importdata off}.
Please note that our threadvars are *not* allocated by the linker. Each
access to a threadvar goes through the fpc_relocate_threadvar function
(just check the assembler code to see what I mean).

> 04/13/17 12:28:02, Sven Barth via fpc-devel
> <fpc-devel at lists.freepascal.org>:
>> Yes, they are specifically for Pascal code, and more specifically only
>> FPC code, as you can't use a Delphi dynamic package with FPC or the
>> other way round. Plugin systems are indeed one of the uses of dynamic
>> packages (with the main benefactor probably being Lazarus), but as I
>> wrote above, sharing of binary code is also an important point.
> As I see it, the Lazarus case is a "plugin" case.
>> Especially if your own "product" consists of multiple executables that
>> share the same code.
> Is it really so important as to outweigh the disadvantages of a
> performance drop in production? If we are talking about hard drive
> space, that hardly seems urgent nowadays: in 99% of cases, most of a
> program product today is taken up by resources and data, not by code.
> If we are talking about RAM, all these indirect tables etc. take it up
> too. Yes, possibly less than the shared code, but how much RAM will we
> really save? Will it even be visible next to the application's data?
> Plus, if we have a project with so many small executables, perhaps the
> best solution would be to go the "BusyBox-style" way?

Delphi has used these indirect accesses from the beginning. Do you hear
anyone really complaining about performance problems due to them?

And it's not about saving RAM or disk space! It's about *binary code
reuse*, the ability to fix a bug in multiple executables by merely
fixing the one bug in a package.

> During my conversation with Sven Barth he wrote: "With dynamic packages
> you can share classes, strings, memory, etc. between the modules (the
> main binary and the different package libraries)". Let's look at the
> most widespread operating systems, namely Windows and the Unix family.
> On Windows every application starts in ntdll.dll, passes through
> kernel32.dll, and only after that reaches the "main" function in the
> EXE file. So kernel32.dll is always loaded, and it already has a decent
> memory manager (the process heap function group). Why not use it? It
> would also allow sharing memory with C code (and strings with Pascal
> code), and it already exists in the application's memory. On Linux, if
> an application uses shared libraries it uses libdl.so, which needs
> libc.so, so we already have the libc heap. As far as I know, the
> situation is the same on FreeBSD and Solaris.
>> It isn't merely the memory management. The Object Pascal RTL consists
>> of much more than just memory management: there is exception handling,
>> the RTTI, resource strings, unit initialization and finalization.
>> These are all things that other languages either have no clue about
>> (e.g. C) or have their own implementations anyway.
> First of all, sorry for the non-English part in my previous message; I
> forgot to translate it before sending. As for the rest, I understand
> this, but I think interoperation should be considered in parts.
> Ideally, some of these parts are OS-level decisions (like SEH on
> Win64). Interoperation is a really good thing, so it should be done to
> the maximum possible extent that doesn't hurt performance, of course.
> If we can share memory regions with other code without any problems,
> better to do it.
>> unit initialization and finalization
> On Windows there are special sections for it; on Unixes a shared
> library has init sections, plus libc has atexit.

These special sections are executed by the C runtime. However, we don't
link against the C runtime, at least on Windows and Linux, which reduces
the dependencies.

> I think it would be good if we at least minimized the FPC-specific
> parts (where this doesn't impose additional restrictions). For
> everything else: thanks, now I see the problems and will think about
> them. But as I see it for now, for some things (memory manager,
> exceptions) a good way forward could be to communicate with the GCC
> developers (and those of C/C++-like core languages) and with OS
> developers (I mean for Linux, FreeBSD, etc.). This is a very hard path,
> but if someone somewhere went down it, it could make the situation
> totally different :)

No, the FPC specific parts are there for a reason. They aren't there
just for the fun of it.

>> Units are compiled in a way that they can be used inside a package and
>> (as it is now) outside of it. Whether your executable uses dynamic
>> packages or not is determined merely by a compile-time option of your
>> executable (namely whether you specify to use a dynamic package using
>> -FP, e.g. -FPrtl).
> This is the key of the problem. If our code already uses all these
> indirect tables etc., then it no longer matters whether we use dynamic
> packages or not: we have already paid all the performance and memory
> penalties. That is why I suggested making two different modes.

No. This will needlessly complicate things.

