>> Is UTF-16 Widestring in FPC (and Delphi 200x?) not done just ignoring
>> the surrogates?
>
> Let's hope not,
I don't think full UTF-16 would really be desirable over UCS-2.

Imagine you have a string of a few million characters (e.g. a book). Every
function that needs to locate the n-th character (x[n], Copy, ...) would take
forever, because it has to scan the whole string from the start to account for
surrogate pairs (unless WideString were stored in some rather complex
tree-like structure instead of a flat array).
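Just to illustrate the cost, here is a hypothetical sketch (not the RTL's
actual code; the program name, CodePointPos and the sample string are made up)
of what locating the n-th code point in a surrogate-aware UTF-16 string would
have to do. With UCS-2 this is a single array access; with full UTF-16 every
lookup degenerates into a scan:

{ Hypothetical sketch, not FPC's actual WideString implementation:
  finding the N-th code point in surrogate-aware UTF-16 needs a linear
  scan, because a code point may occupy one or two WideChars. }
program SurrogateIndexDemo;

{$mode objfpc}{$H+}

function CodePointPos(const S: WideString; N: Integer): Integer;
{ Returns the 1-based WideChar index where the N-th code point starts,
  or 0 if S has fewer than N code points.  O(Length(S)), not O(1). }
var
  I, Count: Integer;
begin
  Result := 0;
  Count := 0;
  I := 1;
  while I <= Length(S) do
  begin
    Inc(Count);
    if Count = N then
      Exit(I);
    { A high surrogate ($D800..$DBFF) followed by a low surrogate
      ($DC00..$DFFF) encodes one code point in two WideChars, so skip both. }
    if (Ord(S[I]) >= $D800) and (Ord(S[I]) <= $DBFF) and
       (I < Length(S)) and
       (Ord(S[I + 1]) >= $DC00) and (Ord(S[I + 1]) <= $DFFF) then
      Inc(I, 2)
    else
      Inc(I);
  end;
end;

var
  W: WideString;
begin
  { U+1D11E lies outside the BMP and is encoded as the pair $D834 $DD1E. }
  W := 'a';
  W := W + WideChar($D834) + WideChar($DD1E) + 'b';
  WriteLn('Code point 3 starts at WideChar index ', CodePointPos(W, 3));
  { Prints 4: the second code point occupies WideChar indices 2 and 3. }
end.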
-Michael