[fpc-devel] Blackfin support
DrDiettrich1 at aol.com
Wed Jul 14 14:35:23 CEST 2010
Marco van de Voort schrieb:
> In our previous episode, Michael Schnell said:
>> On 07/14/2010 12:00 AM, Hans-Peter Diettrich wrote:
>>> One of these issues are memory mapped files, that can speed up file
>>> access a lot (I've been told), perhaps because it maps directly to the
>>> system file cache?
>> AFAIK File Mapping is used a lot and very successfully with Linux, but
>> it _is_ available with NTFS. No idea if, here, the implementation is
>> done in a way that it's really fast.
> I've tried it long ago in win2000, and maybe even XP. If you linearly
> access the files with a large enough block size (8 or 16 kB), the
> difference was hardly measurable (files of roughly 300 MB).
> Probably if you go linearly, the read-ahead is already near-efficient.
Windows offers file flags for exactly that purpose, which notify the OS
of intended (strictly) sequential file reads, allowing it to read ahead
more file content into the system cache.
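On Windows the flag in question is FILE_FLAG_SEQUENTIAL_SCAN, passed to
CreateFile; the POSIX analogue is posix_fadvise with
POSIX_FADV_SEQUENTIAL. A minimal sketch of the Unix-side hint (the
scratch file is purely illustrative, and posix_fadvise is only
available on Unix-like systems):

```python
import os
import tempfile

# Create a small scratch file to demonstrate the hint on.
fd, path = tempfile.mkstemp()
os.write(fd, b"x" * 65536)

# Advise the kernel that we will read the file strictly sequentially,
# so it can schedule more aggressive read-ahead into the page cache.
# (Windows counterpart: FILE_FLAG_SEQUENTIAL_SCAN passed to CreateFile.)
os.posix_fadvise(fd, 0, 0, os.POSIX_FADV_SEQUENTIAL)

os.lseek(fd, 0, os.SEEK_SET)
data = os.read(fd, 65536)

os.close(fd)
os.remove(path)
print(len(data))  # 65536
```

The hint changes no semantics; it only shapes the kernel's read-ahead
policy, which is why a linear read with a large block size already gets
most of the benefit.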
> Mapping does not change that picture (the head still has to move if you
> access a previously unread block). Mapping mainly is more about
> - zero-copy access to file content
> - and uses the VM system to cache _already accessed_ blocks.
- and backs RAM pages with the original file, so they never end up in
the swap file.
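The two points above can be sketched with Python's mmap module (the
scratch file and its contents are illustrative): the mapped pages come
straight from the system file cache, so reading them needs no
intermediate user-space copy, and a file-backed mapping is paged
against the file itself rather than against swap.

```python
import mmap
import os
import tempfile

fd, path = tempfile.mkstemp()
os.write(fd, b"program demo; begin end.\n")

# Map the whole file (length 0 = entire file). The pages are shared
# with the system file cache: zero-copy access to the file content.
with mmap.mmap(fd, 0, access=mmap.ACCESS_READ) as mm:
    # Slicing a mmap copies bytes; a memoryview reads in place.
    view = memoryview(mm)
    head = bytes(view[:7])
    print(head)  # b'program'
    view.release()  # release the buffer before the mapping closes

os.close(fd)
os.remove(path)
```

Note that mapping does nothing for a block that was never read: the
first touch of a page still causes a disk access, exactly as Marco
describes.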
> The whole-file hypothesis could easily be tested (if it applies at all)
> by increasing the buffer size. But if I understand finput.pas properly,
> FPC already uses a 64k buffer size (which is larger than most source
> files), so I don't expect much gain here.
I see the biggest benefit in the many possible optimizations in the
scanner and parser, which can be implemented *only if* an entire file
resides in memory. If memory management and (string) copies really are
as expensive as some people say, then these *additional* optimizations
should yield the truly achievable speed gain.
IMO we should give these additional optimizations a try, independent of
the use of MMF. When an entire source file is loaded into memory, we
can measure the time between reading the first token and hitting EOF in
the parser, eliminating all uncertain MMF/file cache timing.
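A hedged sketch of that measurement idea, with a deliberately trivial
stand-in "scanner" (real compiler scanning is far richer, but the
principle — time only the token loop over an in-memory buffer, with no
I/O inside the timed region — is the same):

```python
import time

# The whole "source file" is already in memory before timing starts,
# so MMF and file-cache effects cannot influence the measurement.
source = "program demo;\nbegin\n  writeln('hi');\nend.\n" * 1000

def scan(buf):
    # Hypothetical trivial scanner: whitespace-separated word tokens.
    return buf.split()

# Timed region: first token to EOF, scanner work only, no I/O.
t0 = time.perf_counter()
tokens = scan(source)
elapsed = time.perf_counter() - t0

print(len(tokens))  # 5000
```

Because the buffer is fully resident, any speed difference between two
scanner implementations measured this way reflects the scanner itself,
not the file-access strategy.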
It's only a matter of whether such a refactored model would be
accepted, since it's a waste of time if it will never become part of
trunk, for reasons already known.