[fpc-devel] Re: fpc-devel Digest, Vol 112, Issue 32

Mark Morgan Lloyd markMLl.fpc-devel at telemetry.co.uk
Fri Aug 23 10:57:58 CEST 2013


Steve wrote:

> Most of this is just pussyfooting around the issue. Forgive me if I
> misrepresent your position here, but it seems that you maintain that the
> implementation should use a modern instruction set because 1) it
> generates simpler assembler

Yes.

> 2) it supports Linux and hence has lots of FP developers

Partly.

> and 3) because GCC does so.

No.

In slightly more detail:

1) The slightly newer opcodes make the 390 look more like the canonical 
CPUs that most people are used to these days. Since any attempt to 
implement a port without the help (or at least tolerant supervision) of 
the core developers is doomed, I think that using recently-introduced 
facilities is justified. Note that I'm /not/ suggesting going over to a 
full 64-bit implementation, since this would really complicate a 
subsequent port back to strict 390 or 370 compatibility (i.e. 32-bit 
data and 31- or 24-bit addresses).

2) If an existing FPC developer wants to get involved, it's not 
reasonable to expect him to have to work up the learning curve of MVS 
before he can actually run the target environment. Linux on Hercules is 
a no-brainer.

3) GCC is only relevant if external libraries are to be linked, at which 
point it defines the ABI.
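
To make that concrete, here's a rough sketch (plain libc on Linux, nothing 
390-specific assumed, and the program name is made up) of why the ABI 
matters to FPC as soon as external code is involved: the declaration below 
only works if FPC's idea of the calling convention matches GCC's.

  program abidemo;

  { getpid() from libc: the cdecl modifier tells FPC to follow the C
    calling convention, i.e. the ABI that GCC defines for this platform,
    and 'external' leaves the symbol for the linker to resolve. }
  function getpid: longint; cdecl; external 'c' name 'getpid';

  begin
    writeln('pid = ', getpid);
  end.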

I'm /not/ banging the drum unreservedly for GCC and Linux, but IBM (and 
many other companies) promote it as a "universal API" and I like to 
think that they're not total idiots.

Similarly, I'm not banging the drum unreservedly for GNU's as (gas) as the 
assembler etc., since I recognise that a great deal of useful work has 
been done using IBM's assemblers. But the GNU tools are freely 
available, while as I understand it IBM (and clone) assemblers aren't: 
it's not realistic to expect developers to sign a "no commercial use" 
agreement etc., since this would infect the entire project in a way that 
the (L)GPL doesn't.

So I think that it's worth considering 32-bit Linux using GNU tools as 
the initial target. But it's not my choice; I'm only the guy with a foot 
in both the FPC and mainframe camps who's doing his best to prevent them 
drifting off in uncomfortable directions.

Picking up one of Sven's points:

> If 360 assembly code can be used on modern processor variants as well
> I see no problem with targeting that at first only. The point with
> using Linux as a first target is that you would not need to implement
> a new RTL, but only the code generator and the processor-specific RTL
> code. The point for gas/ld was simply that we have existing writers
> for these two, but writing your own writers for IBM-specific tools
> isn't rocket science either... But it's another thing you'd need to
> implement.
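
As an aside, the "processor specific RTL code" is, as I understand it, 
mostly primitives along the following lines. This is a simplified sketch 
rather than the actual RTL source, but it shows the kind of routine a new 
port can take from the generic Pascal implementation at first and replace 
with tuned assembler later.

  { Simplified sketch of a generic RTL primitive; the real routine is
    more elaborate, but portable Pascal like this is enough to get a
    new CPU port off the ground. }
  procedure FillByte(var x; count: SizeInt; value: byte);
  var
    p: PByte;
  begin
    p := PByte(@x);
    while count > 0 do
    begin
      p^ := value;
      inc(p);
      dec(count);
    end;
  end;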

There's also the issue of the assembler reader (used, if I understand 
things correctly, to parse inline assembler, mostly in the lower-level 
bits of the RTL). This seems to cause almost as many problems during 
development as the assembler writer, and having to support (or at least 
pass through) complex assembler macros isn't going to make things any 
easier.
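
To make the reader's job concrete: on an existing target the RTL contains 
routines along these lines (a hypothetical i386 example, not actual RTL 
source), and it's the body of the asm block that the reader has to parse. 
A 390 reader would face the same job in whichever syntax is chosen, plus 
whatever macros have to be passed through.

  {$asmmode intel}

  { Hypothetical helper: read the i386 flags register. The function
    result is returned in EAX, so popping into EAX is all that's needed. }
  function GetEFlags: longint; assembler;
  asm
    pushfd
    pop eax
  end;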

> Your choice is really nothing to do with me. I don't plan on getting
> involved. I just don't like to see half-truths and misunderstandings
> being passed off as the 'one true way'.

If I were promoting a "one true way" I wouldn't be doing my best to keep 
open the secondary option of getting FPC running on (or at least 
generating code for) older OSes such as freely-available versions of 
MVS, VM/CMS and so on using IBM-format assembler. But I don't think 
these are viable primary targets.

-- 
Mark Morgan Lloyd
markMLl .AT. telemetry.co .DOT. uk

[Opinions above are the author's, not those of his employers or colleagues]


