[fpc-pascal] FW: Floating point question
James Richters
james.richters at productionautomation.net
Mon Feb 5 15:31:53 CET 2024
Sorry if this ends up being a duplicate, but since it did not show up after a
few hours, I thought it was blocked because I had attached a screenshot to it.
So here I removed the attachment and instead link to the screenshot:
https://drive.google.com/file/d/1IJWSqR8UYgWRoq9oVwuOSa-XFd35FXZP/view?usp=sharing
James
-----Original Message-----
From: James Richters <james.richters at productionautomation.net>
Sent: Monday, February 5, 2024 7:26 AM
To: 'FPC-Pascal users discussions' <fpc-pascal at lists.freepascal.org>
Subject: RE: [fpc-pascal] Floating point question
I ran this program in Borland Turbo Pascal 7.0 for DOS, and it does not have
this problem: AA, BB, and CC all produce identical results. But FPC, even
with {$Mode TP}, produces different results if I use 1440 vs 1440.0. I guess
Delphi compatibility is more important than Turbo Pascal compatibility even
if I am in Turbo Pascal mode?
I wonder if Delphi even has this bug.
I propose that all modes should use the old way and only {$Mode Delphi}
should do it the other way. Or better yet, require a switch to allow the
reduction in precision, instead of always doing it and requiring a switch to
turn it off, because if you are using Extended you can't effectively turn it
off, as there is no -CF80. Or best of all, figure out why it's a bug, and fix
the bug. The bug is that using something like 1440.0 causes single-precision
evaluation and a loss of data, while 1440, 1440.1, or anything else that
isn't .0 does not have this problem. Precision should never be sacrificed,
and the user should not have to do something special to protect against a
reduction in precision.
When I learned Turbo Pascal back in the '90s, this was never a thing: I never
needed to cast my constants to prevent low-precision results. It wasn't in
my Turbo Pascal textbook in technical school, because it didn't need to be;
constants were always evaluated at full precision.
Delphi came along and did something which could make sense in certain
circumstances, and I agree it's probably an improvement, but it's being
implemented slightly incorrectly. The stipulation that precision is reduced
to the lowest precision that doesn't cause data loss is not being followed:
we are getting data loss.
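As far as I can tell, the mechanism is that the literal 1440.0 is itself
exactly representable as an IEEE 754 single, so the compiler can pick single
for it without data loss, but the quotient 33/1440 is not representable at
that precision, so the loss shows up one step later. Here is a small Python
sketch of that IEEE arithmetic (my illustration of the rounding, not of FPC
itself; Python's float is a double, and struct round-trips a value through
single):

```python
import struct

def as_single(x):
    """Round-trip a Python float (an IEEE double) through IEEE single."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

# The literal 1440.0 survives the trip to single precision unchanged...
print(as_single(1440.0) == 1440.0)        # True: no data loss in the literal
# ...but the quotient does not: rounding it to single loses precision.
print(as_single(33 / 1440) == 33 / 1440)  # False: data loss in the result
```

So the "lowest precision without data loss" test holds for the literal itself
but not for the expression built from it.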
I don't know if Delphi got it right, because I don't have Delphi, but there
seems to be a real bug here. We should not be getting data loss, and things
I have done for decades should still work the same as they used to, but they
are definitely not working the way they used to.
Below is the program I tested with. With TP7, I never get a reduction in
precision: 1440 and 1440.0 yield the same results, with full-precision
evaluation of constants, and the result is never 8427.0224... unless I put
it into a variable defined as a Single. Even then, there is never any
difference between AA, BB, or CC for any given variable type.
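For what it's worth, 8427.0224... is exactly what rounding the whole constant
to single produces: near 8427 a single's ulp is 2^-10, so the nearest single
to 8427.02291666... is 8427 + 23/1024 = 8427.0224609375. A quick Python check
of that IEEE rounding (again just an illustration; struct rounds a double
through single):

```python
import struct

def as_single(x):
    """Round a Python float (an IEEE double) to the nearest IEEE single."""
    return struct.unpack('<f', struct.pack('<f', x))[0]

exact = 8427 + 33 / 1440     # evaluated in double precision
print(f"{exact:.10f}")       # 8427.0229166667
print(as_single(exact))      # 8427.0224609375  (the suspect result)
print(8427 + 23 / 1024)      # 8427.0224609375  (ulp near 8427 is 2**-10)
```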
I could not cut and paste the results from TP7 because I am running it in
DOSBox, and I can't copy and paste from the DOSBox window, but I put the
screenshot as an attachment. I don't know if images will go through the
mailing list; if not, I'll put it somewhere and link it.
My argument is that I never needed to cast my constants before to avoid a
loss in precision; it never even crossed my mind to think I would have to do
that. It's not the way Pascal works. In Turbo Pascal, if I do
Writeln(33/1440); and Writeln(33/1440.0); I get the exact same thing. In
FPC I get very different results. I didn't need to cast the 1440.0 in Turbo
Pascal to prevent a loss of data; it's not necessary.
It may be the way Delphi works, but I doubt Delphi has the loss-of-precision
bug. 90% of my code was ported over from Turbo Pascal; I skipped Delphi and
didn't make a Windows version until FPC, and I have a LOT of units with
{$Mode TP} that still have my original code. I don't know if Turbo Pascal
reduced the precision of constants to gain performance, but I do know it was
never a problem; I never had a reduction in precision.
I think it's great to make improvements like this, and I'm all for gaining
performance, but it needs to be implemented the way it was intended, and it
needs to work: there should be no loss of precision, and I should not have
to do something special to prevent it. It should be precise by default.
James
program TESTDBL1;

Const
  AA = 8427+33/1440.0;
  BB = 8427+33/1440;
  CC = 8427.0229166666666666666666666667;

Var
  A_Ext : Extended;
  B_Ext : Extended;
  C_Ext : Extended;
  A_Dbl : Double;
  B_Dbl : Double;
  C_Dbl : Double;
  A_Sgl : Single;
  B_Sgl : Single;
  C_Sgl : Single;

begin
  A_Ext := AA;
  B_Ext := BB;
  C_Ext := CC;
  A_Dbl := AA;
  B_Dbl := BB;
  C_Dbl := CC;
  A_Sgl := AA;
  B_Sgl := BB;
  C_Sgl := CC;
  WRITELN ( 'A_Ext = ', A_Ext : 20 : 20 );
  WRITELN ( 'B_Ext = ', B_Ext : 20 : 20 );
  WRITELN ( 'C_Ext = ', C_Ext : 20 : 20 );
  WRITELN;
  WRITELN ( 'A_Dbl = ', A_Dbl : 20 : 20 );
  WRITELN ( 'B_Dbl = ', B_Dbl : 20 : 20 );
  WRITELN ( 'C_Dbl = ', C_Dbl : 20 : 20 );
  WRITELN;
  WRITELN ( 'A_Sgl = ', A_Sgl : 20 : 20 );
  WRITELN ( 'B_Sgl = ', B_Sgl : 20 : 20 );
  WRITELN ( 'C_Sgl = ', C_Sgl : 20 : 20 );
end.