Sorting it all Out
Michael Kaplan's random stuff of dubious value
Be sure to read the disclaimer here first!
I thought I would explain a bit more about how surrogates work in Unicode, since they do not seem to be well described in very many places. First, some definitions (all from the Unicode Glossary and the Unicode Roadmap sites):
Ok, it is all as clear as mud now, right? :-)
The problem is that even if the definitions are applied consistently, there is no good feel for exactly how they work, how high and low surrogates combine, and so on.
(Other questions, like why high surrogates have lower numbers than low surrogates, are covered in other posts.)
Let's see if we can't do something about that....
(Warning: some MATH content ahead!)
We start with the Basic Multilingual Plane -- it spans the code points from U+0000 to U+FFFF. Some of these code points are assigned; and a large subset of those are assigned characters. In all there are 65,536 code points in this and every other plane; you can also think of this as 0x10000 or just 2^16 code points. Whatever you find easiest, conceptually.
Now what happens with those high surrogate code points is that the block of 1024 of them is divided into 16 blocks of 64 each. And each one of those blocks is used for a plane:
By convention, U+[##]FFFE and U+[##]FFFF of each plane are set aside and reserved, never to be assigned. This allows internal processes to use them as sentinels. Note that they should never be interchanged with any other process!
Now the way things are numbered, each high surrogate is used, serially, combining with every possible one of the 1024 low surrogates before moving onto the next high surrogate. Thus for supplementary characters you see the following order:
(I skipped some spaces in there for obvious reasons!)
This mechanism allows for many things, such as simple range checking and easy conversion between a code point and a surrogate pair (it is a simple algorithmic macro to do the conversion when/if it is ever needed).
When combined with the way that scripts are assigned in blocks, it is easy to notice things like the following (not a complete list, just a sample!):
So when you combine the BMP's 2^16 code points with the 16 supplementary planes of 64 * 1024 (which is also 2^16 code points!), you get 17 * 2^16, or 1,114,112, code points in total -- which is where that interestingly arbitrary-looking number comes from!
Unicode's Roadmap site has a lot of information about the potential placement of future character allocations in Unicode, for those who are interested.
And for a more reality-based set of links, if you look ahead to Windows Vista, three macros have been added to the winnls.h that comes with the Vista SDK:
I would expect that the meanings are pretty self-explanatory, but if not you can look at the VSDK topics to which I linked. :-)
(On a side note, I find it very cool that the Windows Vista SDK is available right now to everyone, whether they are on the Vista beta or not. It really does help to explain features and functions!)
Now in future posts I could perhaps get into other topics, like algorithmic conversion between UTF-16 and UTF-32....
This post brought to you by all of the supplementary planes of Unicode