[fpc-devel] Blackfin support

Marco van de Voort marcov at stack.nl
Wed Jul 14 12:01:26 CEST 2010


In our previous episode, Michael Schnell said:
>   On 07/14/2010 12:00 AM, Hans-Peter Diettrich wrote:
> > One of these issues are memory mapped files, that can speed up file 
> > access a lot (I've been told), perhaps because it maps directly to the 
> > system file cache?
> AFAIK File Mapping is used a lot and very successfully with Linux, but 
> it _is_ available with NTFS. No idea if, here, the implementation is 
> done in a way that it's really fast.

I tried it long ago on Win2000, and maybe even XP. If you access the files
linearly with a large enough blocksize (8 or 16 kB), the difference was hardly
measurable (with files of roughly 300 MB).

Probably, if you read linearly, the OS read-ahead is already close to optimal.

But FPC might not adhere to this scheme; I don't know whether FPC currently
loads the whole file or leaves the file open while it processes e.g. an .inc
file.

If it doesn't load the whole file, opening other files triggers head movement
(if they are not in the cache) that could be avoided.
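To make that concrete, here is a minimal sketch of slurping a file into memory
with a single read, so later scanning never has to go back to the disk. The
helper name is made up and this is not taken from the compiler sources, just
an illustration of the idea:

{$mode objfpc}{$H+}
program slurp;

{ Hypothetical helper (not from finput.pas): read the whole file with one
  BlockRead so subsequent scanning never touches the disk again. }
function LoadWholeFile(const FileName: string): AnsiString;
var
  f: File;
begin
  Assign(f, FileName);
  Reset(f, 1);                        { untyped file, record size = 1 byte }
  try
    SetLength(Result, FileSize(f));   { one allocation for the whole file }
    if Length(Result) > 0 then
      BlockRead(f, Result[1], Length(Result));   { single read call }
  finally
    Close(f);
  end;
end;

begin
  FileMode := 0;                      { open read-only }
  WriteLn(Length(LoadWholeFile(ParamStr(1))), ' bytes read');
end.

Whether that actually beats a buffered reader would have to be measured; it
mostly trades one big allocation for fewer read calls and fewer reopens.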

Mapping does not change that picture (the head still has to move if you
access a previously unread block). Memory mapping is mainly about
- zero-copy access to the file content (*), and
- using the VM system to cache _already accessed_ blocks.

The compiler does not do enough I/O to make the first worthwhile, and the
second is irrelevant to the compiler's access pattern.
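For comparison, a minimal Unix-only sketch of what mapping a source file looks
like via the mmap bindings in FPC's BaseUnix unit. This is just an
illustration of the mechanism, not something the compiler does:

{$mode objfpc}{$H+}
program mapdemo;

uses
  BaseUnix;

var
  fd   : cint;
  info : Stat;
  p    : Pointer;
begin
  fd := FpOpen(ParamStr(1), O_RDONLY);
  if fd < 0 then
    Halt(1);
  if FpFStat(fd, info) <> 0 then
    Halt(2);
  { Map the whole file read-only; pages are only faulted in when touched. }
  p := FpMmap(nil, info.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
  if (p = nil) or (PtrInt(p) = -1) then
    Halt(3);
  { The content is now directly addressable: no explicit read, no copy,
    and the VM system caches whatever pages were actually accessed. }
  WriteLn('first byte: ', PByte(p)^);
  FpMUnmap(p, info.st_size);
  FpClose(fd);
end.

Note that mapping only establishes the address range; the disk is still hit
the first time a page is touched, which is exactly the point above.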

The only way mapping could matter is if the memory-mapped file reads more
sectors speculatively after a page access, but I don't know if that is the
case; it might just as well be less (since normal file I/O is more likely to
be linear).

So in summary, I think that _maybe_ always reading the whole file might win a
bit in file-reading performance. I don't expect memory mapping to do so.

The whole-file hypothesis could easily be tested (if it applies at all) by
increasing the buffer size. But if I understand finput.pas properly, FPC
already uses a 64k buffer size (which is larger than most source files), so I
don't expect much gain here.
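If someone wants to try it, a rough sketch of such a test could look like the
following. The buffer sizes and the timing are placeholders, and this is not
the compiler's own reading code:

{$mode objfpc}{$H+}
program bufsizetest;

uses
  SysUtils;

const
  { Placeholder buffer sizes to compare; finput.pas reportedly already uses
    64k, so anything below that mainly shows the small-buffer penalty. }
  Sizes: array[0..2] of LongInt = (16 * 1024, 64 * 1024, 1024 * 1024);

var
  i, n  : LongInt;
  total : Int64;
  t0    : QWord;
  f     : File;
  buf   : array of Byte;
begin
  FileMode := 0;                             { open read-only }
  for i := Low(Sizes) to High(Sizes) do
  begin
    SetLength(buf, Sizes[i]);
    Assign(f, ParamStr(1));
    Reset(f, 1);
    total := 0;
    t0 := GetTickCount64;
    repeat
      BlockRead(f, buf[0], Sizes[i], n);     { one buffer-sized chunk }
      Inc(total, n);
    until n = 0;
    Close(f);
    WriteLn(Sizes[i] div 1024, ' kB buffer: ', total, ' bytes in ',
            GetTickCount64 - t0, ' ms');
  end;
end.

For a meaningful comparison the OS file cache would have to be cold before
each run, otherwise everything after the first pass is served from memory.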

And, worse, I think that even if that results in a gain, it is dwarfed by
directory operations (searching files, creating new files) and binary
startup time (of the compiler, but also of other tools).

(*) Empirical time for a Core2 to move a large block (source + dest > cache).


