<!DOCTYPE html>
<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=UTF-8">
</head>
<body>
<p>I didn't follow all the discussions on this topic and all the
details of FPC's compiler options, Delphi compatibility and so on, <br>
but I'd like to comment on this result: <br>
</p>
<pre class="moz-quote-pre" wrap="">
program TESTDBL1 ;
Const
HH = 8427.0229166666666666666666666667;
Var
AA : Integer;
BB : Byte;
CC : Single;
DD : Single;
EE : Double;
FF : Extended;
GG : Extended;
begin
AA := 8427;
BB := 33;
CC := 1440.0;
DD := AA+BB/CC;
EE := AA+BB/CC;
FF := AA+BB/CC;
GG := 8427+33/1440.0;
WRITELN ( 'DD = ',DD: 20 : 20 ) ;
WRITELN ( 'EE = ',FF: 20 : 20 ) ;
WRITELN ( 'FF = ',FF: 20 : 20 ) ;
WRITELN ( 'GG = ',GG: 20 : 20 ) ;
WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.
result:
DD = 8427.02246100000000000000
EE = 8427.02291666666666625000
FF = 8427.02291666666666625000
GG = 8427.02246093750000000000
HH = 8427.02291666666666625000
</pre>
<p><br>
IMO, the computation of AA+BB/CC (the right-hand side) should be
carried out the same way, regardless of the type <br>
on the left-hand side of the assignment. So I would expect the
values in DD, EE and FF to be the same.<br>
</p>
<p>But as it seems, the left-hand side (that is, the type of the
target variable) HAS AN INFLUENCE on the computation <br>
of the right-hand side, and so we get (for example) <br>
</p>
<pre class="moz-quote-pre" wrap="">DD = 8427.02246100000000000000
</pre>
<p>and <br>
</p>
<pre class="moz-quote-pre" wrap="">EE = 8427.02291666666666625000
</pre>
<p>which IMHO is plain wrong. <br>
</p>
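<p>The two values are exactly what you get by evaluating the same
expression once with every intermediate rounded to single precision
and once in double precision. A small sketch (in Python rather than
Pascal; the struct round-trip stands in for single-precision rounding,
so this only illustrates the arithmetic, not FPC's internals): <br>
</p>
<pre class="moz-quote-pre" wrap="">
```python
import struct

def to_single(x):
    """Round a Python float (an IEEE double) to the nearest IEEE single."""
    return struct.unpack('f', struct.pack('f', x))[0]

# Every intermediate rounded to single, as apparently happens for DD:
dd = to_single(8427.0 + to_single(33.0 / 1440.0))

# The same expression evaluated in double precision, as for EE:
ee = 8427.0 + 33.0 / 1440.0

print(dd)  # 8427.0224609375 -- matches DD (and GG) above
print(ee)  # ~8427.0229166666..., matches EE/FF/HH to double precision
```
</pre>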
<p>If all computations of AA+BB/CC were carried out in single
precision only, <br>
all the results DD, EE and FF (though maybe not GG) should be 8427.0224..., <br>
with only minor differences due to the different precisions of the
target variables<br>
(but not differences as large as that between DD and EE above). <br>
<br>
This would be OK IMHO; <br>
it would be easy to explain to everyone that the reduced precision of
these computations <br>
is a consequence of the types of the operands involved.<br>
</p>
<p>Another question, which should be answered separately: <br>
</p>
<p>The compiler apparently assigns types to FP constants. <br>
It does so depending on whether a given decimal representation can be
represented exactly <br>
in a particular FP format or not. <br>
</p>
<p>1440.0 and 1440.5 can be represented exactly in single precision,
so the FP type Single is assigned. <br>
1440.1 cannot, because the fraction 0.1 is a non-terminating binary
fraction, so (I guess) the biggest available FP type is assigned. <br>
1440.25 probably can, so type Single is assigned. <br>
1440.3: biggest FP type.<br>
1440.375: probably Single. <br>
</p>
<p>and so on</p>
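<p>Nobody can be expected to see this at a glance, but the check
itself is mechanical. A sketch (again Python standing in for Pascal;
the helper name exact_in_single is my own invention) classifies the
literals above: <br>
</p>
<pre class="moz-quote-pre" wrap="">
```python
import struct
from decimal import Decimal

def exact_in_single(literal):
    """True if the decimal literal is exactly representable in IEEE single."""
    rounded = struct.unpack('f', struct.pack('f', float(literal)))[0]
    return Decimal(rounded) == Decimal(literal)

for lit in ('1440.0', '1440.5', '1440.1', '1440.25', '1440.3', '1440.375'):
    print(lit, exact_in_single(lit))
# 1440.0, 1440.5, 1440.25, 1440.375 -> True; 1440.1, 1440.3 -> False
```
</pre>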
<p>Now: who is supposed to know, for any given decimal representation
of an FP constant, whether it can <br>
be represented exactly in a single-precision FP variable? This depends
on the length of the decimal representation, <br>
among other things ... and the fraction part has to be a multiple of
a negative power of 2, etc. etc. <br>
</p>
<p>That said: wouldn't it make more sense to give EVERY FP CONSTANT
the FP type with the best available precision? <br>
</p>
<p>If the compiler did this, the problems that arise here could be
solved, I think. <br>
<br>
GG in this case would have the same value as HH, because the
computation involving the constants <br>
(hopefully folded by the compiler at compile time) would be carried
out with the best available precision. <br>
</p>
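<p>As a sketch of the idea (Python doubles standing in for the "best
available precision"; this is an assumption for illustration, not a
statement about FPC's actual constant handling): folding
8427 + 33/1440.0 at that precision agrees with the literal HH rounded
to the same precision. <br>
</p>
<pre class="moz-quote-pre" wrap="">
```python
# Doubles stand in for the compiler's best FP type (an assumption).
gg = 8427.0 + 33.0 / 1440.0                      # constant expression, folded at full precision
hh = float('8427.0229166666666666666666666667')  # the literal HH, rounded once
print(abs(gg - hh))  # zero, or at most an ulp or two
```
</pre>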
<p>HTH, kind regards</p>
<p>Bernd <br>
</p>
<div class="moz-cite-prefix">On 06.02.2024 at 16:23, James
Richters via fpc-pascal wrote:<br>
</div>
<blockquote type="cite"
cite="mid:112601da5910$776ad5d0$66408170$@productionautomation.net">
<pre>program TESTDBL1 ;
Const
HH = 8427.0229166666666666666666666667;
Var
AA : Integer;
BB : Byte;
CC : Single;
DD : Single;
EE : Double;
FF : Extended;
GG : Extended;
begin
AA := 8427;
BB := 33;
CC := 1440.0;
DD := AA+BB/CC;
EE := AA+BB/CC;
FF := AA+BB/CC;
GG := 8427+33/1440.0;
WRITELN ( 'DD = ',DD: 20 : 20 ) ;
WRITELN ( 'EE = ',FF: 20 : 20 ) ;
WRITELN ( 'FF = ',FF: 20 : 20 ) ;
WRITELN ( 'GG = ',GG: 20 : 20 ) ;
WRITELN ( 'HH = ',HH: 20 : 20 ) ;
end.
When I do the division of a byte by a single and store it in an extended, I
get the division carried out as an extended.
FF, GG, and HH should all be exactly the same if there is not a bug.
But:
DD = 8427.02246100000000000000
EE = 8427.02291666666666625000
FF = 8427.02291666666666625000
GG = 8427.02246093750000000000
HH = 8427.02291666666666625000
</pre>
</blockquote>
</body>
</html>