[fpc-devel] integer, cardinal

Vinzent Hoefler JeLlyFish.software at gmx.net
Mon Apr 18 15:46:48 CEST 2005

On Monday 18 April 2005 10:32, Marco van de Voort wrote:
> > On Monday 18 April 2005 09:02, Marco van de Voort wrote:
> > >
> > > I typically use enums. They suffer from the same to-disk problem
> > > though, but that can be remedied using the proper directives.
> >
> > Well, I don't think I will ever use enums to define things like
> > frequency limits.
> Since I don't use 16-bit systems, and frequencies above 2GHz are not
> my domain that is not really a problem for me.

That's not what I meant. The problem arises when the actual code is 
wrong and the supposed limit is reached. That's what a range check 
really is for. :)
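
[Editor's note: the "proper directives" mentioned above are presumably 
FPC's {$PACKENUM n} (a.k.a. {$Zn}), which fixes the storage size of 
enumeration types so they survive a round trip to disk. A minimal 
sketch:]

|   program EnumSize;
|
|   {$PACKENUM 4}  // force enums into 4 bytes, stable for file I/O
|   type
|     TColour = (clRed, clGreen, clBlue);
|
|   {$PACKENUM 1}  // back to the smallest size that fits
|   type
|     TSmallColour = (scRed, scGreen, scBlue);
|
|   begin
|     WriteLn(SizeOf(TColour));       // 4
|     WriteLn(SizeOf(TSmallColour));  // 1
|   end.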

I consider:

|   Crossfeed_Ramp = 300 .. 1000;

still more readable than declaring explicit constants and using "just 
some int" variables. The point is that in such cases I don't even care 
what size a variable of that type would be, because here it simply 
doesn't matter. Of course, such types are definitely not useful when 
you want to store them in files, for exactly the same reason.
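
[Editor's note: for readers unfamiliar with subrange types, here is the 
declaration above in compilable form; with range checking enabled, an 
out-of-range assignment is caught at run time:]

|   {$R+}  // range checking on
|   program RangeDemo;
|   type
|     Crossfeed_Ramp = 300 .. 1000;
|   var
|     Ramp: Crossfeed_Ramp;
|     I: Integer;
|   begin
|     Ramp := 500;  // fine
|     I := 1200;
|     Ramp := I;    // raises a run-time range-check error under {$R+}
|   end.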

> There are similar things already defined in unit ctypes. This way you
> can, for FPC, make even this unit independent of what happens with
> integer/qword etc., because the unit is adapted per platform.
> So
> {$ifdef FPC}  // 1.9.x +
>   mysint8type = ctypes.cint8;
>   myuint8type = ctypes.cuint8;
> {$ELSE}
>   // inferior compilers ( :-) )
> {$endif}

Yeah. But I'm somehow afraid of the "C" in it. ;-)
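
[Editor's note: Marco's fragment, fleshed out into a compilable unit; 
the fallback aliases in the {$ELSE} branch are illustrative, not from 
the original post:]

|   unit portabletypes;
|
|   {$MODE OBJFPC}
|
|   interface
|
|   {$IFDEF FPC}  // 1.9.x +
|   uses
|     ctypes;
|
|   type
|     mysint8type = ctypes.cint8;
|     myuint8type = ctypes.cuint8;
|   {$ELSE}
|   type
|     // fallback for compilers without unit ctypes
|     mysint8type = ShortInt;
|     myuint8type = Byte;
|   {$ENDIF}
|
|   implementation
|
|   end.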

> > Alas, AFAICS there is no way to define the range and the storage
> > size of a type independent from each other in FPC?
> No. Only 1,2,4,8 sized integer types exist, in both flavours, and in
> ctypes identifiers are predefined.

Yes. I understand that FPC has no notion of biased representation, but 
sometimes it can still be useful to define a constrained type with a 
certain range while specifying its storage size independently (setting 
it to four bytes, for instance, even if the type definition itself 
would allow a smaller size).
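
[Editor's note: one workaround, sketched here with invented names, is 
to keep the subrange type for in-memory use and always widen to an 
explicitly sized integer at the file boundary:]

|   type
|     Crossfeed_Ramp = 300 .. 1000;  // FPC picks the smallest fitting size
|     TRampStorage   = LongInt;      // fixed 4 bytes on disk
|
|   procedure WriteRamp(var F: File; const Value: Crossfeed_Ramp);
|   var
|     Stored: TRampStorage;
|   begin
|     Stored := Value;  // widen to the fixed on-disk size
|     BlockWrite(F, Stored, SizeOf(Stored));
|   end;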

> Other sizes are rarely used (3 byte sometimes in RGB handling code),

Hmm, rarely, yes. Still I once had to interface to a DSP56K, which has a 
native word size of 24 bits.

> and would only unnecessarily complicate the code generation.

Agreed, but usually I am looking at all that stuff from the viewpoint 
of the programmer, not of the compiler writer.

> > > And it is a lot more laborious.
> >
> > Well, that's not really an argument (at least not on its own). The
> > work *can* pay off, even for pure documentation purposes.
> IMHO not really. I'd rather spend some time on the fileformats and
> their docs. Such code is typically near-mechanically creating
> conversions for data formats, hardly worth item-per-item
> documentation.

See my small example above. The type is almost self-documenting. More 
comments would be needed if I had just declared it as Word/LongInt/
$whatever or (to avoid introducing too many "different" types) not at all.

> > I kind of agree with that. My bad experience comes from the fact
> > that (some of) the code I am maintaining, and have now moved to
> > Linux, just wasn't written that way by the original authors. Binary
> > formats were used for almost everything, nobody cared about
> > alignment, and so on... So every once in a while some internal
> > structures changed and *kaboom*, the new version couldn't read the
> > old data.
> IOW the problem was bad programmers, not evil binary formats ;-)

If you'd like to put it that way, yes. But that's why we use Pascal (or 
in my case, even Ada): To protect ourselves from doing bad things too 
easily. :)
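
[Editor's note: the alignment problem described above is commonly 
avoided with a packed record of explicitly sized fields, which keeps 
the on-disk layout stable across compiler versions and 16/32/64-bit 
targets (endianness aside). Field names here are invented:]

|   type
|     TFileHeader = packed record       // 'packed' removes alignment padding
|       Magic   : array[0..3] of Char;  // 4 bytes
|       Version : LongWord;             // always 4 bytes, on every target
|       Count   : LongWord;
|     end;                              // SizeOf(TFileHeader) = 12 everywhere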

> I learned a lot of the tricks with dealing with binary files in my
> BBS era days. This due to the 16->32bit changes, different compilers,
> the large amounts of versions of binary files floating around, and
> the danger of truncation by modem.

I didn't realize you were thaaaaat old. <gd&r>

