[fpc-devel] Is calling the Windows Unicode APIs really faster than the ANSI APIs?
Marco van de Voort
marcov at stack.nl
Fri Sep 26 10:01:09 CEST 2008
In our previous episode, Aleksa Todorovic said:
> > I suppose it would be worthwhile doing timing tests for saving text
> > files as well. After all, 99% of the time, text files are stored in
> > UTF-8. So in D2009 you would first have to convert UTF-16 to UTF-8 and
> > then save, and the opposite when reading, plus checking for the byte
> > order marker. If you used UTF-8 as the String encoding, no
> > conversions would be required and no byte order marker checks would be needed.
>
> That is true. But, on the other hand, 99% of the time your
> application will work with strings in memory, and only 1% of the time will
> be spent on I/O.
This is not true. Working with database exports (simple transformations,
data-pump functionality) is quite a normal task for a programmer.
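The overhead described in the quote above, transcoding a UTF-16 in-memory string to UTF-8 on save and checking for a byte order mark on load, can be sketched as follows. This is a minimal illustration in Python (standing in for the Delphi/FPC case); the helper name `decode_utf8` is made up for the example.

```python
import codecs

# In-memory string; Python's str is encoding-agnostic, here it stands in
# for a D2009-style UTF-16 string.
text = "héllo wörld"

# Saving: the UTF-16 string must first be transcoded to UTF-8.
utf8_bytes = text.encode("utf-8")

# Reading back: check for (and strip) a UTF-8 byte order mark (EF BB BF).
def decode_utf8(data: bytes) -> str:
    if data.startswith(codecs.BOM_UTF8):
        data = data[len(codecs.BOM_UTF8):]
    return data.decode("utf-8")

# Round-trips correctly with or without a BOM.
assert decode_utf8(codecs.BOM_UTF8 + utf8_bytes) == text
assert decode_utf8(utf8_bytes) == text
```

With a UTF-8 string type, the encode/decode steps above disappear entirely; only the BOM check remains.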
> I support the decision to use UTF-16 over UTF-8. String processing is
> far simpler, it's actually as simple as it should be. Have you
> ever done any serious processing using UTF-8? It's not a nightmare, but
> it's surely a real pain. No such problems with UTF-16.
It's no different from UTF-16 if you want to do it properly. In both you
have to look out for multi-unit characters: surrogate pairs in UTF-16,
multi-byte sequences in UTF-8.
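A small sketch (in Python, for brevity) of why neither encoding lets you treat one code unit as one character: a character outside the Basic Multilingual Plane takes a surrogate pair in UTF-16 and a four-byte sequence in UTF-8.

```python
s = "a\U0001F600b"  # 'a', U+1F600 (outside the BMP), 'b' -- 3 characters

# UTF-16: the non-BMP character needs a surrogate pair,
# so 3 characters become 4 code units (8 bytes in UTF-16-LE).
utf16_units = len(s.encode("utf-16-le")) // 2
assert utf16_units == 4

# UTF-8: the same character needs 4 bytes,
# so 3 characters become 6 bytes.
assert len(s.encode("utf-8")) == 6

# Either way, "one unit == one character" fails; correct code must
# detect surrogate pairs (UTF-16) or multi-byte sequences (UTF-8).
```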
Also note that there hasn't been a final decision on UTF-16 only. The
original idea was to have a multi-encoding string, but that was dropped
when the reality of Tiburon set in.
Tiburon actually does this too: it has an automated way of dealing with
UTF-8 as well.
IMHO any system should generally allow working with strings in the native
encoding, which means UTF-8 on *nix.