[fpc-devel] integer, cardinal
Marco van de Voort
marcov at stack.nl
Mon Apr 18 12:32:56 CEST 2005
> On Monday 18 April 2005 09:02, Marco van de Voort wrote:
> > > Well, and I actually do this in a major app at work. Not on
> > > everything, of course, but it can heavily simplify some stuff, for
> > > instance because I can use the Low and High-attribu^Wfunctions on
> > > the type which is safer than using constants, because the compiler
> > > can do the work for me.
> >
> > I typically use enums. They suffer from the same to-disk problem
> > though, but that can be remedied using the proper directives.
>
> Well, I don't think I will ever use enums to define things like
> frequency limits.
Since I don't use 16-bit systems, and frequencies above 2 GHz are not my
domain, that is not really a problem for me.
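A minimal sketch of the directive remedy for the enum to-disk problem
mentioned above, assuming {$PACKENUM} is the directive meant (the enum
itself is just made up for illustration):

{$PACKENUM 1}   // pack enumeration values into 1 byte instead of the mode default
type
  TBaudRate = (br300, br1200, br2400, br9600);

// SizeOf(TBaudRate) is now 1, so values written to disk keep their size
// across compilers and compiler versions.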
> Ok, in that case you're probably out of luck and have to use fixed size
> types in almost any case. Still, you probably want to define them
> separately and explicitly instead of relying on some compiler
> behaviour. At least that's what I do:
There are similar things already defined in unit ctypes. That way you can,
for FPC, make even this unit independent of what happens with integer/qword
etc., because ctypes is adapted per platform.
So:

type
{$ifdef FPC} // 1.9.x+; ctypes must be in the uses clause
  mysint8type = ctypes.cint8;
  myuint8type = ctypes.cuint8;
{$else}
  // inferior compilers ( :-) )
{$endif}
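Once such aliases exist, on-disk structures can be declared against them; a
sketch (the record layout is just made up for illustration):

type
  THeaderRec = packed record
    version : myuint8type;
    flags   : myuint8type;
    count   : ctypes.cuint32;
  end;

// packed + fixed-size fields: SizeOf(THeaderRec) is 6 on every target, so
// the record can be blockread/blockwritten as a unit.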
> Alas, AFAICS there is no way to define the range and the storage size of
> a type independent from each other in FPC?
No. Only 1-, 2-, 4- and 8-byte integer types exist, in both signed and
unsigned flavours, and in ctypes identifiers for them are predefined.
(Strictly, ctypes is defined as containing the types of the reference C
compiler on that platform, for header purposes, but the size-invariant types
are also defined there and commonly reused.)
Other sizes are rarely used (3 bytes sometimes in RGB handling code) and
would only unnecessarily complicate the code generation.
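So if you want both a checked range and a guaranteed storage size, they stay
two separate declarations; a minimal sketch (the names are made up):

type
  TFrequencyHz     = 0..2000000000;   // range type: Low()/High() known to the compiler
  TFrequencyOnDisk = ctypes.cuint32;  // explicit 4-byte storage for the file format

// at the file boundary convert explicitly, e.g.
//   diskvalue := TFrequencyOnDisk(freq);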
> > And it is a lot more laborious.
>
> Well, that's not really an argument (at least not on its own). The work
> *can* pay off, even for pure documentation purposes.
IMHO not really. I'd rather spend that time on the file formats and their
docs. Such code typically does little more than mechanically convert data
formats, hardly worth item-per-item documentation.
> Yes, I think I can understand that. In that case I too would use binary
> formats, I guess.
(A nice example is also the game Angband: x86 savegames work on PPC systems
etc., and IIRC I once tried a 64-bit Alpha running OpenBSD too.)
> I kind of agree with that. My bad experience comes from the fact that
> the code I am maintaining and have now moved to Linux just wasn't
> written that way by (some of) the people who wrote it. Binary formats
> were used for almost everything, nobody cared about alignment and so
> on... So every once in a while some internal structures changed and
> *kaboom*, the new version couldn't read the old data.
IOW the problem was bad programmers, not evil binary formats ;-)
I learned a lot of the tricks for dealing with binary files in my BBS-era
days, due to the 16->32-bit changes, different compilers, the large number
of versions of binary files floating around, and the danger of truncation
by modem.
Adapting to endianness was only a small change.
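A minimal sketch of that kind of adaptation, assuming the LEtoN/BEtoN
helpers current FPC ships in the system unit (file name and field are made
up):

uses SysUtils, Classes, ctypes;

var
  s     : TFileStream;
  count : cuint32;
begin
  s := TFileStream.Create('data.bin', fmOpenRead);
  try
    s.ReadBuffer(count, SizeOf(count));
    // the format defines the field as little-endian; LEtoN is a no-op on
    // x86 and a byte swap on big-endian targets such as ppc
    count := LEtoN(count);
    writeln(count);
  finally
    s.Free;
  end;
end.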