<div dir="ltr"><div dir="ltr">On Sat, May 4, 2019 at 5:31 PM J. Gareth Moreton <<a href="mailto:gareth@moreton-family.com">gareth@moreton-family.com</a>> wrote:<br></div><div class="gmail_quote"><blockquote class="gmail_quote" style="margin:0px 0px 0px 0.8ex;border-left:1px solid rgb(204,204,204);padding-left:1ex">
I figure data-flow analysis will address everything eventually, but I <br>
can't promise the compiler won't become very inefficient as a result.<br></blockquote><div><br></div><div>I mean no offense by this, and note that I think you've done more for FPC in the last year (or two) than many people ever have, but I must say that to me this kind of thinking is very wrong, and a big, specific part of what holds FPC back. It's as though people constantly forget that *not everything* needs to be an immutable, hardcoded part of the overall compilation process, permanently affecting everyone until the end of time. In reality you can implement *literally anything at all*, even something that takes seven hours to run. As long as it is an opt-in optimization enabled via a command-line argument or compiler directive, there's no logical reason for anyone to complain about it because, again, it's *opt-in*.</div><div><br></div><div>And don't even get me started on the subject of "people who care in any way whatsoever about the size of the FPC executable, even though they are running desktop hardware and a desktop OS, *not* embedded hardware with an embedded OS, so it could not possibly matter to them in *any* practical way even if the FPC executable were 25x larger than it currently is."</div></div></div>
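<div><br></div><div>For concreteness, FPC already exposes this kind of opt-in mechanism: individual optimizations can be gated behind an -Oo switch or a per-unit $OPTIMIZATION directive. A minimal sketch, assuming the existing DFA (data-flow analysis) switch as the model; the SUPEROPT name is purely hypothetical:</div>

```pascal
{ Sketch: how an expensive, opt-in pass could be surfaced to users.
  FPC's data-flow analysis is already enabled this way. }

{$optimization DFA}   { per-unit directive: opt in to data-flow analysis }

{ ...or equivalently on the command line:
    fpc -O2 -Oodfa myunit.pas }

{ A hypothetical seven-hour pass (SUPEROPT is an invented name) would
  follow the same pattern, costing nothing to anyone who doesn't ask:
    fpc -O2 -OoSUPEROPT myunit.pas }
```

<div>The point being that the switch infrastructure for "slow but optional" already exists; a new pass only has to register a name in it.</div>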