[fpc-devel] An interesting thought... AI
charlie at scenergy.dfmk.hu
Fri Nov 11 00:08:00 CET 2022
On Thu, 10 Nov 2022, Sven Barth via fpc-devel wrote:
> You still need to feed the model with the necessary rules and with
> necessary training data of both correct and incorrect approaches.
> But even then *I* wouldn't want to have any of that black box mambo
> jumbo in FPC, cause when a bug occurs in the optimizations due to some
> decision the model made... well... tough luck.
Well, a few years back I read a paper (maybe it was linked earlier in the
thread, I don't recall) that embraced this and used a genetic algorithm for
optimization: basically, it took the default compiler output and just
started changing instructions randomly to see if the result performed
better... In 99.99999% of cases it crashed, but eventually there was a
version that performed better, and that was taken as the baseline for the
next iteration. (Of course, with the necessary state verification
surrounding the code.)
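This is not the paper's actual system, but the loop it describes can be sketched in a few lines of Python on a toy instruction set: mutate the "program" at random, throw away every mutant that fails the state verification, and keep a mutant as the new baseline only when it is both correct and cheaper. All names (ops, cost model, mutation rate) are made up for illustration.

```python
import random

# Toy "instruction set": each op transforms a single accumulator value.
OPS = ["inc", "dec", "dbl", "nop"]

def run(prog, x):
    """Interpret a toy program on input x."""
    for op in prog:
        if op == "inc":
            x += 1
        elif op == "dec":
            x -= 1
        elif op == "dbl":
            x *= 2
    return x

def is_correct(prog, reference, tests):
    # the "state verification": a mutant counts only if it matches
    # the reference behaviour on every test input
    return all(run(prog, x) == reference(x) for x in tests)

def cost(prog):
    # crude performance proxy: number of real (non-nop) instructions
    return sum(op != "nop" for op in prog)

def mutate(prog, rng):
    # randomly rewrite some instructions, as in the scheme described above
    return [rng.choice(OPS) if rng.random() < 0.3 else op for op in prog]

def improve(baseline, reference, tests, iters=20000, seed=1):
    rng = random.Random(seed)
    best = list(baseline)
    for _ in range(iters):
        cand = mutate(best, rng)
        # the overwhelming majority of mutants are simply wrong;
        # keep a candidate only if it is correct *and* cheaper
        if cost(cand) < cost(best) and is_correct(cand, reference, tests):
            best = cand
    return best

def reference(x):
    # the computation we want: f(x) = 2*x + 2
    return 2 * x + 2

# a correct but wasteful baseline: ((x + 1) * 2 + 1) - 1
baseline = ["inc", "dbl", "inc", "dec"]
tests = range(-5, 6)
best = improve(baseline, reference, tests)
```

With enough iterations the search stumbles on a correct program that nops out the redundant inc/dec pair, which is the whole trick: correctness is enforced by the verification step, and "performance" is whatever the fitness function measures.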
Now, I can still see a billion ways this could fall apart (what if the
input data changes? You'd probably have to combine it with some fuzzing
too), but it was an interesting idea nevertheless.
Anyway, as long as a simple memory layout change, or pure caching luck in
a live system, can cause huge differences in code performance, sometimes
bigger than what the optimizer can achieve(!), the entire exercise is
largely pointless, IMO. As much as one of course doesn't want to generate
total junk as code, in real-world applications there are usually bigger
wins to be had just by improving the algorithms used.
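To illustrate the scale of those algorithmic wins with a deliberately simple (and hypothetical) example: swapping a linear search for a binary search on sorted data cuts the worst-case probe count from N to about log2(N), a factor no instruction-level optimizer can approach. The functions below just count probes instead of measuring wall-clock time, to keep the comparison deterministic.

```python
def linear_search(a, key):
    """Scan left to right; returns (index, number of probes)."""
    probes = 0
    for i, v in enumerate(a):
        probes += 1
        if v == key:
            return i, probes
    return -1, probes

def binary_search(a, key):
    """Halve the sorted range each step; returns (index, number of probes)."""
    lo, hi, probes = 0, len(a) - 1, 0
    while lo <= hi:
        mid = (lo + hi) // 2
        probes += 1
        if a[mid] == key:
            return mid, probes
        elif a[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1, probes

a = list(range(1024))
# worst case for the linear scan: the key sits at the very end
_, lin_probes = linear_search(a, 1023)
_, bin_probes = binary_search(a, 1023)
```

Here the linear scan needs 1024 probes against 11 for the binary search, roughly a 90x reduction, which dwarfs the few-percent gains typical of low-level code tweaking.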
(I once saw a talk about this, where they basically measured GCC's
optimizer against caching and code layout changes, and the optimizer's
effect was mostly within measurement error, apart from a few big things
like register variables... Which is nuts.)