[fpc-devel] LinkLib Issues In Lazarus and in FPC-2.0.2

Marco van de Voort marcov at stack.nl
Mon Jul 17 22:36:16 CEST 2006


On Mon, Jul 17, 2006 at 04:37:07PM +0200, Jonas Maebe wrote:

> > So, in order to support this, we would need to write a hack for when a
> > {$linklib gtk} occurs and write gtk-config-specific code. For a
> > different library we would need a different hack.
> 
> Of course not. You should never change a {$linklib magicstring} into
> some command. The exact command(s) to be executed would have to be
> embedded in the unit with a compiler directive (and probably, just
> like the library name, be ifdef'd per platform).

The whole point of the linkaliasing setup is to turn $linklib arguments
into mere identifiers (while compiling units) that can then be mapped at
will to real library names when determining how to link, with the default
being 1:1. This means we could move the whole ifdef'd {$linklib} mess to a
platform- (or distro-)specific fpc.cfg #include and, more importantly, OUT
OF SOURCE (regardless of internal or external linker, though with an
external one more of the discovery process must be done up front).

I want more out of source.
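
A minimal sketch of what that out-of-source mapping could look like. The
file name and the gtk library name are made up, and the exact spelling of
the library-substitution option is illustrative, not final syntax:

    # fpc.cfg: pull in a distro-specific include, keeping the mapping
    # out of the unit sources
    #IFDEF LINUX
    #INCLUDE /etc/fpc-distro-links.cfg
    #ENDIF

    # /etc/fpc-distro-links.cfg (hypothetical file):
    # map the bare identifier from {$linklib gtk} to the real library
    # name shipped by this particular distro release
    -XLAgtk=gtk-x11-2.0

The unit source keeps a plain {$linklib gtk}; adapting to a new distro is
then a config edit instead of a source change and recompile.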

> > Also, the fact is we do have an internal linker, and it will hopefully
> > be used for most platforms in the future.
> 
> I do not see this happening any time soon for e.g. Mac OS X.

I didn't see it happening at all half a year ago, but thanks to Peter...

> > That would be a pity, since OS X is the se[…]
> >
> > Ultimately it will go in the same direction as with the assembler
> > readers, even for external linkers: in order to fully understand the
> > parameters, the compiler must understand them.
> 
> Yes, this is a downside of integrating everything (with the upside  
> being speed and memory usage, obviously).

It is inevitable. But that doesn't mean there is only one way to integrate
stuff. It must be done cautiously and pragmatically.

> > Generate 3 units, and introduce some unit aliases for gtk like Marco
> > did for libraries; problem solved. That is still a much more generic
> > solution than *-config, and a lot less work.
> 
> Of course making the developer type in everything manually is always  
> more generic and less work for us.

The idea is to be able to fix an existing release _after_ a change becomes
known (read: after a distro release). It can be as simple as saying "copy
and paste this set", or "download this source and run the utility".

> That is not my point, and I'm not  
> against providing this functionality. As I said in my reply to Marco,  
> I consider it complementary, as a last resort if all else fails.

I consider pkgconfig very important. But I want to have it feed a store: an
FPC configuration file.

This is because I want to be able to say "mail me your compiler state", and
send back a new one. Too much automatism will make things less transparent,
both to the user and to the one who has to support it.
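
A rough sketch of what "feeding a store" could mean. Everything here is
hypothetical: the program name, the output file name, and the choice of
gtk+-2.0 as the package; the point is only that the compiler reads a plain,
inspectable file rather than calling external scripts itself:

    program pkg2cfg;
    { Run pkg-config once, translate its -l flags into FPC linker
      options, and cache them in a cfg include. The result is a plain
      file the user can mail around, inspect, and edit by hand. }
    uses Classes, SysUtils, Process;
    var
      P: TProcess;
      Toks, Cfg: TStringList;
      i: Integer;
    begin
      P := TProcess.Create(nil);
      Toks := TStringList.Create;
      Cfg := TStringList.Create;
      try
        P.CommandLine := 'pkg-config --libs-only-l gtk+-2.0';
        P.Options := [poUsePipes, poWaitOnExit];
        P.Execute;
        Toks.LoadFromStream(P.Output);
        { one token per line: pkg-config separates flags with spaces }
        Toks.Text := StringReplace(Toks.Text, ' ', LineEnding,
                                   [rfReplaceAll]);
        for i := 0 to Toks.Count - 1 do
          if Trim(Toks[i]) <> '' then
            Cfg.Add('-k' + Trim(Toks[i]));  { pass e.g. -lgtk-x11-2.0
                                              through to the linker }
        Cfg.SaveToFile('gtk.cfg');  { #INCLUDE this from fpc.cfg }
      finally
        Cfg.Free; Toks.Free; P.Free;
      end;
    end.

The compiler never runs pkg-config; it only reads the cached file, so
"mail me your compiler state" stays a one-file affair.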

Supporting multiple versions is hard, and other people's designs are not
always perfect or tailored to our use. Do I have to mention Xcode here? :-)

> > that: if we want to be a more professional tool than C, we should not
> > copy the deficiencies. Makefiles are one of them, Autohell is another,
> > and I consider this one such a thing as well.
> 
> I think all this "competing with C" and not-invented-here syndrome is  
> downright silly, along with all the claiming that most bad things  
> come from C.

That was Almindor, not me. However, there is a point in this that I agree
with: all this infrastructure is not designed for anything but the standard
toolchain. It is not like e.g. Win32, something any vendor can build on top
of.

Even for them there are bumps, but those are polished away for the default
systems during distro release engineering: e.g. for a new compiler the
whole build environment is changed and made consistent. For us, though,
stuff breaks.

That is not "not-invented-here" syndrome or C phobia, but a simple reality.
I agree a bit with Florian that there is a fundamental problem in the Unix
way of doing things here (and especially in the Open Source parts).
Something I would never have agreed with, say, five years back.

> If anything, it comes mostly from the standard Unix  
> design philosophy (small, well-delineated tools which perform their  
> core function and *only* that core function)

_Eighties_ Unix philosophy. Have you seen the sheer size of an average
Linux distribution? Install *BSD, build a world, and watch it roll over
your screen :-). Unix nowadays is gigantic and monolithic. These are old
sentiments.

> Most *nix C compilers follow that principle very well, but other C
> development environments are more integrated (like commercial Mac and
> Windows IDEs, where you sometimes can hardly e.g. tell the compiler from
> the linker).

IMHO the C _compiler_ way of doing things is not Unix philosophy, but
simply the limited memory of K&R's PDP-7: batch-wise processing made do
with less memory. See the various Unix history books. Of course there was
much revisionism afterwards.
 
> we are going to keep getting stuck by having to reimplement everything
> because we cannot work with what is already there

Nobody said that. And we don't. We have linked to C libraries since before
I joined FPC. I personally made the libc port, _before_ there was Darwin on
the horizon, just because I wanted to try to have the best possible system,
both from a support and a user perspective.

But that doesn't mean we must mindlessly follow a path that wasn't designed
for us. We are different, and can never mask that on Unix until we can
interpret C headers and use auto* seamlessly. That has burdens, but also
advantages.

[more perceived elitism deleted]

> To get back on topic: the more hacks you add to work around
> already-existing infrastructure "because we don't want to rely on
> external tools", the more you force yourself to keep doing that more
> and more forever in the future. If you like that, fine. I don't.

I want to decide on a case-by-case basis, with a bit of caution.

ld: yes, hard to avoid, even as a backup. gmake: yes, at least until we
have something more fitting for us. lipo: I don't see a problem.

gtk-config? It seems to do an awful lot. And calling scripts from the
compiler binary, selecting one by some magic, parsing the output, trying to
merge it with our own state, and then supporting all of that?
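
For reference, this is roughly the kind of soup such a script emits (output
from a GTK+ 1.x era system; exact paths and flags vary per distro):

    $ gtk-config --libs
    -L/usr/lib -L/usr/X11R6/lib -lgtk -lgdk -rdynamic -lgmodule -lglib
    -ldl -lXi -lXext -lX11 -lm

Library search paths, linker flags, and low-level X libraries all mixed
together; every bit of that would have to be parsed and reconciled with the
compiler's own settings.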

I'm sorry, but you accuse us of C phobia, yet this all sounds like a leap
of faith that only a Slashdot Linux newbie would make, not a seasoned OSS
developer with a decade of experience.

> >This we agree on, the compiler should do the work.
> 
> Above you said that the compiler is the wrong location?
> 
> Anyway, I never said I agree with this principle (although it sure is  
> very convenient in a lot of cases, in others it's very annoying  
> because it requires that you roll your own solution for everything).  
> I simply said this is how the compiler works.

Let's forget about all the advocacy and focus on this particular case. What
can we do, as a grander plan, to tackle the increasing demands on Linux
backend configurability and ease of use at the same time?

I assume you don't have time at the moment to hack up an initial -config
implementation, to see over a few distro generations how it works out?

Especially your remarks (in the first quoted paragraph above) about more
linker data in source scare the hell out of me from a support perspective.


