[fpc-devel] integer, cardinal

Vinzent Hoefler JeLlyFish.software at gmx.net
Mon Apr 18 12:11:58 CEST 2005

On Monday 18 April 2005 09:02, Marco van de Voort wrote:

> > > > That's why Pascal has range types. Define the range you need,
> > > > and don't use "just some type" which has the range you think
> > > > you will need.
> > >
> > > I actually tried this in a major app at work.
> >
> > Well, and I actually do this in a major app at work. Not on
> > everything, of course, but it can heavily simplify some stuff, for
> > instance because I can use the Low and High-attribu^Wfunctions on
> > the type which is safer than using constants, because the compiler
> > can do the work for me.
> I typically use enums. They suffer from the same to-disk problem
> though, but that can be remedied using the proper directives.

Well, I don't think I will ever use enums to define things like 
frequency limits.
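
To illustrate what I mean (just a minimal sketch, the names are made 
up): with a subrange type the limits travel with the type itself, so 
Low and High replace magic constants everywhere:

   program RangeDemo;

   {$mode objfpc}

   type
     { Hypothetical audio frequency range in Hz;
       the limits live in the type, not in constants. }
     TFrequency = 20 .. 20000;

   var
     f: TFrequency;
   begin
     f := Low(TFrequency);   // 20, no magic constant needed
     WriteLn('Range: ', Low(TFrequency), ' .. ', High(TFrequency));
     { With range checking on ({$R+}), an assignment like
       f := 25000 would fail at run time instead of silently
       wrapping. }
   end.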

> > Yes, of course, that's the outside world.
> But if you have only business data, nearly everything is directly
> connected to it. One only moves the problem (from declaring types
> fixed size to data-spooling).

Ok, in that case you're probably out of luck and have to use fixed-size 
types in almost every case. Still, you probably want to define them 
separately and explicitly instead of relying on some compiler 
behaviour. At least that's what I do:

-- 8< -- snip --

unit Interfaces;

interface

{                                                                      }
{ hardware interface types with specific sizes                         }
{                                                                      }
{ PLEASE  use these types in real hardware interfacing stuff only, not }
{         as C-like integer replacements!                              }
{                                                                      }

type
   Signed_8    = ShortInt;  //  -2**7 .. 2**7  - 1
   Unsigned_8  = Byte;      //      0 .. 2**8  - 1
   Signed_16   = SmallInt;  // -2**15 .. 2**15 - 1
   Unsigned_16 = Word;      //      0 .. 2**16 - 1
   Signed_32   = LongInt;   // -2**31 .. 2**31 - 1
   Unsigned_32 = LongWord;  //      0 .. 2**32 - 1
   Signed_64   = Int64;     // -2**63 .. 2**63 - 1
   Unsigned_64 = QWord;     //      0 .. 2**64 - 1

implementation

end {Interfaces}.
-- 8< -- snip --

Alas, AFAICS there is no way in FPC to define the range and the storage 
size of a type independently of each other?
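
One workaround (again just a sketch of what I mean, with made-up 
names): keep a range-checked subrange for computation and a fixed-size 
type for storage, and let the assignment at the I/O boundary do the 
checking:

   program StorageDemo;

   {$mode objfpc}
   {$R+}  // range checking on

   type
     TFrequency  = 20 .. 20000;  // the range we actually want checked
     Unsigned_16 = Word;         // the size we want on disk

   var
     f  : TFrequency;
     raw: Unsigned_16;
   begin
     f   := 440;
     raw := f;    // widening to the storage type, always safe
     f   := raw;  // range-checked at run time under {$R+}
     WriteLn(f);
   end.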

> And it is a lot more laborious.

Well, that's not really an argument (at least not on its own). The work 
*can* pay off, even for pure documentation purposes.

> > Maybe, you have to do such things more often, but - no offense
> > meant - earlier experience led me to believe that binary file
> > formats are evil.
> Text formats vary with systems too. No solution either (e.g.
> encodings, line endings and other control characters).

Well, for the line endings you already did the job for me, and anything 
else here is currently bound to be either pure ASCII or XML (where I 
can stand the bloat that comes with it ;-).

> And besides
> that, I'm spooling millions of objects to disc during startup and
> shutdown. Cooked I/O is orders of magnitude slower. (I work with
> readers/writers; text-IO output is supported for debug purposes.)

Yes, I think I can understand that. In that case I too would use binary 
formats, I guess.

> Binary formats are IMHO not evil. It's more that the simple basic
> rule of programming applies to it, think things through thoroughly
> before starting, and then the best effort deals with the bulk of the
> problems, and for the rest make custom solutions if the problems
> actually arise. One simply can't tailor to every possibility without
> making things costly or unusable.

I kind of agree with that. My bad experience comes from the fact that 
the code I am maintaining (and have now moved to Linux) simply wasn't 
written that way by (some of) its original authors. Binary formats were 
used for almost everything, nobody cared about alignment and so on... 
So every once in a while some internal structure changed and *kaboom*, 
the new version couldn't read the old data.
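
A packed record of fixed-size fields at least takes alignment out of 
the equation, and an explicit byte-order conversion pins down the rest; 
something along these lines (field names made up, and assuming the 
NtoBE helper from FPC's System unit):

   program SpoolDemo;

   {$mode objfpc}

   type
     { Hypothetical on-disk header: packed, so no alignment
       padding, and every field has an explicit size. }
     THeader = packed record
       Magic   : LongWord;  // Unsigned_32
       Version : Word;      // Unsigned_16
       Count   : Word;
     end;

   var
     h: THeader;
   begin
     { NtoBE fixes the byte order before the record ever
       hits the disk, so a big-endian reader stays happy. }
     h.Magic   := NtoBE(LongWord($46504321));
     h.Version := 1;
     h.Count   := 0;
     WriteLn('record size: ', SizeOf(THeader));  // 8 bytes, no padding
   end.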

> > They tend to change too often, they tend to use types that don't
> > even survive half a decade, and even if this doesn't matter,
> > known-size types won't save you from the Hell of Endianness. And
> > if you don't have that problem, you don't have it at all. ;-)
> How do you read in EBCDIC text files, btw?

I don't; luckily, our platforms don't include ancient IBM mainframes. ;-) 
And if they did, I'd either use COBOL or write a translation unit for it.

> And unicode files in some transient standard?

Again: I don't. I just use plain old 7-bit ASCII. If I wanted to be 
non-portable, I'd use binary files, you know. ;-)
