Unicode

File Format
Name: Unicode
Released: 1991

Unicode is a standard character set: an assignment of numeric values to characters. A huge number of characters from various writing systems (modern or ancient), as well as special symbols of many types, are each given a number. It was devised beginning in 1987, with the first version published in 1991. Subsequent revisions have continually expanded its character repertoire.

Unicode was developed in reaction to the unwieldy multiplicity of character sets that had arisen to include various subsets of the many characters left out of the English-centric ASCII set. It has been successful to the point where just about all technical standards dealing with characters now are defined in terms of Unicode, with even the older proprietary encodings cross-referenced to the Unicode characters they encode.

The Unicode character set is defined in ISO/IEC 10646. The Unicode standard takes the character set from ISO/IEC 10646 and adds standard algorithms and rules for how to use it. For example, it defines rules for composing characters from separate diacritical elements and for left-to-right vs. right-to-left text ordering, so using Unicode can be more complex than just converting a series of numbers into characters.

The term character is ambiguous, and Unicode encodes many things that are arguably not characters, so the term code point is often used instead. Code point technically refers to the numeric value, but in practice it also refers to the entity encoded by that value.

The standard way to denote a Unicode code point is to prefix it with "U+", and write the number in hexadecimal, with a minimum of four hex digits. For example, code point 42 is written as U+002A, and code point 1,114,109 is U+10FFFD.
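
The notation is easy to produce programmatically. A minimal sketch in Python (the helper name u_plus is ours, not anything standard):

    # Format a code point in the conventional "U+" notation:
    # hexadecimal, uppercase, padded to at least four digits.
    def u_plus(code_point: int) -> str:
        return f"U+{code_point:04X}"

    print(u_plus(42))       # U+002A
    print(u_plus(1114109))  # U+10FFFD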

Each code point is also assigned a human-readable name, which may be written after the "U+" notation. For example, you might see "U+002A ASTERISK" or "U+03A9 GREEK CAPITAL LETTER OMEGA".
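
In Python, these names can be looked up with the standard unicodedata module, which carries the data from the Unicode Character Database; a quick sketch:

    import unicodedata

    # Print each character in "U+XXXX NAME" form.
    for ch in ("*", "\u03a9"):
        print(f"U+{ord(ch):04X} {unicodedata.name(ch)}")
    # U+002A ASTERISK
    # U+03A9 GREEK CAPITAL LETTER OMEGA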

Details

Early versions of Unicode were designed as a 16-bit character encoding: a potential repertoire of 65,536 code points, each representable as a 16-bit (2-byte) unsigned integer. The "big-endian vs. little-endian" problem meant that two possible byte streams could correspond to the same document, but the Byte Order Mark character could be used to distinguish them.
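
A small Python sketch of the problem (the third line's output is shown for a little-endian machine):

    # The same two-character string produces two different byte
    # streams depending on byte order; a leading Byte Order Mark
    # (U+FEFF) tells a reader which one it is looking at.
    text = "Hi"
    print(text.encode("utf-16-be").hex())  # 00480069
    print(text.encode("utf-16-le").hex())  # 48006900
    # Python's plain "utf-16" codec prepends a BOM in native order:
    print(text.encode("utf-16").hex())     # fffe48006900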

Later versions of Unicode expanded the potential number of code points (to a range of 0 to 1,114,111), so that even 16 bits weren't enough to encode all possible characters.

Unicode is sometimes described as consisting of 17 planes of 65,536 code points each, with plane 0 ranging from U+0000 to U+FFFF, plane 1 ranging from U+10000 to U+1FFFF, and so on. Plane 0 is known as the Basic Multilingual Plane or BMP, and an attempt is made to place the most important characters in it.
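
The plane of a code point is simply its value divided by 65,536, i.e. the bits above the low 16. A sketch in Python (the function name plane is ours):

    # Return which of the 17 planes a code point belongs to.
    def plane(code_point: int) -> int:
        if not 0 <= code_point <= 0x10FFFF:
            raise ValueError("not a Unicode code point")
        return code_point >> 16

    print(plane(0x03A9))    # 0  (Basic Multilingual Plane)
    print(plane(0x1F4A9))   # 1  (Supplementary Multilingual Plane)
    print(plane(0x10FFFD))  # 16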

The first 128 Unicode code points, 0-127, are the same as those of ASCII (including both the printable characters and the C0 controls). The next 128, 128-255, match ISO 8859-1 (including the C1 controls); since ISO 8859-1 itself agrees with ASCII at 0-127, the first 256 Unicode code points are equivalent to that standard.
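
This one-to-one correspondence is easy to check in Python, which exposes ISO 8859-1 under the codec name "latin-1":

    # Decoding all 256 possible byte values as ISO 8859-1 yields
    # exactly the first 256 Unicode code points, in order.
    data = bytes(range(256))
    decoded = data.decode("latin-1")
    assert [ord(ch) for ch in decoded] == list(range(256))
    print(decoded[0x41], decoded[0xE9])  # A é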

Encodings

Once numbers are assigned to characters, they can be encoded as sequences of bytes in various ways, as defined in the specifications of particular character encodings.

The most common Unicode encodings are UTF-8, UTF-16, and UTF-32. See Character Encodings for a longer list.
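
A short Python comparison of the three, using one string that mixes a 1-byte, 2-byte, and 4-byte UTF-8 character (bytes.hex with a separator needs Python 3.8 or later):

    # UTF-8 is variable-width (1-4 bytes per code point), UTF-16
    # uses 2 bytes or a 4-byte surrogate pair, UTF-32 always 4.
    s = "A\u03a9\U0001F4A9"  # U+0041, U+03A9, U+1F4A9
    for enc in ("utf-8", "utf-16-be", "utf-32-be"):
        print(enc, s.encode(enc).hex(" "))
    # utf-8     41 ce a9 f0 9f 92 a9
    # utf-16-be 00 41 03 a9 d8 3d dc a9
    # utf-32-be 00 00 00 41 00 00 03 a9 00 01 f4 a9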

There is no encoding named simply "Unicode". If a format specification says that text is encoded in "Unicode", it probably means UTF-16 or UCS-2. If the document is related to Microsoft Windows, it probably means UTF-16LE.
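
When such data carries a BOM, a BOM-aware decoder can recover the text either way; a Python sketch:

    # Python's "utf-16" codec reads the Byte Order Mark, picks the
    # right endianness, and strips the mark from the result.
    blob_le = b"\xff\xfeH\x00i\x00"  # BOM + "Hi", little-endian
    blob_be = b"\xfe\xff\x00H\x00i"  # BOM + "Hi", big-endian
    print(blob_le.decode("utf-16"))  # Hi
    print(blob_be.decode("utf-16"))  # Hi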

Notes

And if you think Unicode is full of crap, you've got some support with this character: U+1F4A9 PILE OF POO.
