[fpc-devel] Blackfin support
Marco van de Voort
marcov at stack.nl
Wed Jul 14 15:44:21 CEST 2010
In our previous episode, Hans-Peter Diettrich said:
> >
> > Probably if you go linearly, the readahead is already near efficient.
>
> Windows offers certain file attributes for that purpose, which notify the
> OS of intended (strictly) sequential file reads - this would allow it to
> read ahead more file content into the system cache.
I can vaguely remember something like that too. It is a matter of hacking
that into the RTL, and then measuring a make cycle (which requires a few
reboots to preclude caching effects).
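
(For reference: the attribute I think is meant here is FILE_FLAG_SEQUENTIAL_SCAN.
An untested sketch of how a source file could be opened with that hint on
Windows - not what finput.pas does today; the file name and buffer handling
are just placeholders:)

program seqhint;

{$mode objfpc}

uses
  Windows;

var
  h: THandle;
  buf: array[0..65535] of Byte;  { same 64k granularity finput.pas uses }
  got: DWORD;
begin
  { the flag only promises strictly sequential access; the cache manager
    may then read ahead more aggressively }
  h := CreateFile('somefile.pas', GENERIC_READ, FILE_SHARE_READ, nil,
                  OPEN_EXISTING,
                  FILE_ATTRIBUTE_NORMAL or FILE_FLAG_SEQUENTIAL_SCAN, 0);
  if h = INVALID_HANDLE_VALUE then
    Halt(1);
  while ReadFile(h, buf, SizeOf(buf), got, nil) and (got > 0) do
    { scanner would consume buf[0..got-1] here } ;
  CloseHandle(h);
end.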
> > Mapping does not change that picture (the head still has to move if you
> > access a previously unread block). Mapping is mainly about
> > - zero-copy access to file content
> > - and uses the VM system to cache _already accessed_ blocks.
> - and backs RAM pages with the original file, so they never end up in
> the swap file.
If swapping enters the picture, then all these savings are peanuts, so we
assume it is absent.
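
(Untested sketch of the mapping variant, just to make those points concrete:
the view gives zero-copy access, the VM caches the touched pages, and those
pages are backed by the file itself rather than the swap file. File name and
error handling are placeholders:)

program mmfsketch;

{$mode objfpc}

uses
  Windows;

var
  hFile, hMap: THandle;
  p: PAnsiChar;
  size: DWORD;
begin
  hFile := CreateFile('somefile.pas', GENERIC_READ, FILE_SHARE_READ, nil,
                      OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, 0);
  if hFile = INVALID_HANDLE_VALUE then
    Halt(1);
  size := GetFileSize(hFile, nil);
  hMap := CreateFileMapping(hFile, nil, PAGE_READONLY, 0, 0, nil);
  p := PAnsiChar(MapViewOfFile(hMap, FILE_MAP_READ, 0, 0, 0));
  if (p <> nil) and (size > 0) then
  begin
    { scanner could tokenize p[0..size-1] in place, zero-copy;
      pages not yet touched are faulted in on first access }
    UnmapViewOfFile(p);
  end;
  CloseHandle(hMap);
  CloseHandle(hFile);
end.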
> > The whole-file hypothesis could easily be tested (if it applies at all) by
> > increasing the buffer size. But if I understand finput.pas properly, FPC
> > already uses a 64k buffer size (which is larger than most source files), so I
> > don't expect much gain here.
>
> I see the biggest benefit in the many possible optimizations in the scanner
> and parser, which can be implemented *only if* an entire file resides in
> memory. If memory management and (string) copies really are as
> expensive as some people say, then these *additional* optimizations
> should give the real achievable speed gain.
That's easily said, but once you get into the details, you often have to
make compromises. And sacrifice speed.
> IMO we should give these additional optimizations a try, independently
> of the use of MMF. When an entire source file is loaded into memory,
> we can measure the time between reading the first token and hitting EOF
> in the parser, eliminating all uncertain MMF/file cache timing.
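
(Untested sketch of such a measurement; ParseWholeFile is a hypothetical
stand-in for the real scanner/parser loop, nothing compiler-specific is
assumed:)

program parsetime;

{$mode objfpc}

uses
  SysUtils, DateUtils;

procedure ParseWholeFile;
begin
  { stand-in: read tokens until EOF }
end;

var
  t0, t1: TDateTime;
begin
  t0 := Now;   { just before the first token is read }
  ParseWholeFile;
  t1 := Now;   { just after EOF is hit }
  WriteLn('parse time: ', MilliSecondsBetween(t0, t1), ' ms');
end.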
>
> It's only a matter of the acceptance of such a refactored model, since
> it's a waste of time if it never becomes part of the trunk, for
> already known reasons.
I don't think we are ever going to give an up-front carte blanche for a massive
rewrite to go into trunk. That is simply not sane.
A submission will always be judged on performance and maintainability
before being admitted.
If this bothers you, try to find smart ways to phase the changes, limit
yourself to a few things at a time, and don't try to speed-optimize I/O, change
the parser, allow multiple frontends etc., all at the same time.