Talk:Unicode

This paragraph was added to the start of the article:

Unicode is a standard used in computer software for encoding human readable characters in digital form. The most common encoding is the ASCII code, which can encode a maximum of 127 characters, which is enough for the English language. As computer use spread to other languages, the shortcomings of ASCII became more and more apparent. There are many other languages with many other characters; Asian languages in particular contain many, many characters.

I removed it because it is inaccurate (It overplays Unicode-as-a-standard rather than Unicode as a consortium that produces lots of standards), confusing (its mention of ASCII is not clearly historical), and adds no information that isn't already in the article. I assume, however, that it was added because someone thought the existing first paragraph was unclear, so I'm open to suggestions about how to improve it. --Lee Daniel Crocker

Downloads

I am not a techie! Nevertheless I can see the usefulness of much of the material available in Unicode. Neither am I the sort of anti-techie that complains that anything in other than plain-Jane unaccented English alphabetical characters must be thrown out of Wikipedia, or that articles should not be displaying meaningless question marks. I was visiting the chess page, and someone there has made a valiant effort to produce diagrams of how the pieces move by using only ordinary keyboard characters. I'm sure that he would not take it as a sign of disrespect when I say that it looks like shit.

I see no such chart there. Evertype 15:43, 2004 Jun 20 (UTC)
That's because they've been removed. They were there when the comment was posted in March 2002. --Zundark 16:11, 20 Jun 2004 (UTC)

I'm sure that most of us would like to see the special symbols, letters, or chinese characters at the appropriate time and place. At the same time I understand that for many Wikipedians there are technical reasons which prevent their hardware from dealing with this material (eg. limited memory). Then there are others for whom only the appropriate software is missing. Even some of the people with hardware restrictions may be able to handle Greek or Russian, though probably not Chinese. In cases where I've tried to find the code, I've ended up wading through reams of technical discussions. These discussions may be very interesting, but they don't provide a solution to my immediate problem.

The practical suggestion may be a notice at the head of any article containing symbols not in ISO 8859-1, saying in effect: "This article contains non-standard characters. You may download these characters by activating this LINK". Eclecticology

Just because an HTML document contains characters that are not in the ISO 8859-1 range doesn't mean that the characters are nonstandard. HTML 4.0 allows nearly all of Unicode to be used in a document, and all web browsers make an attempt to handle any character they encounter. The problem is merely that the underlying operating systems, upon which the browsers rely to provide character rendering, tend to be either not Unicode aware or just do not have a good selection of fonts (character-to-glyph mappings) installed.
There's no reliable way to guess the user's character rendering capabilities, so we really don't know when to tell people when it would be a good idea to download font files, and fonts tend to be OS-specific anyway. I prefer just to acknowledge in the prose that any non-ASCII characters may or may not render as they are *supposed* to. I don't think we should dumb down the HTML and avoid those characters though. - mjb 18:21 Feb 20, 2003 (UTC)
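A minimal Python sketch of mjb's point (the helper name and sample text are illustrative, not from the discussion): any character outside a document's own charset can still travel as an HTML numeric character reference.

    def to_char_refs(text):
        """Replace each non-ASCII character with a &#NNNN; reference."""
        return ''.join(c if ord(c) < 128 else '&#%d;' % ord(c) for c in text)

    print(to_char_refs('Köln – 東京'))
    # -> K&#246;ln &#8211; &#26481;&#20140;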

In cases where I've tried to find the code, ...

What exactly were you looking for? Do you have the Unicode value, and you're looking for a typical glyph (like an ASCII chart)? Are you looking for the Unicode value?

These discussions may be very interesting, but they don't provide a solution to my immediate problem.

What exactly is your "immediate problem"?


Is there a reason to use '<code>foo</code> <code>bar</code> <code>baz</code> ...' instead of '<code>foo bar baz ...</code>'? -- Miciah

UTF-7

Isn't there a UTF-7? Or is it an invention of Microsoft (it's in .NET)? CGS 21:54, 16 Sep 2003 (UTC).

Yes[1] (http://czyborra.com/utf/#UTF-7), but it's virtually never used. --Brion 23:33, 16 Sep 2003 (UTC)

The oldest of Unicode's encodings is UTF-16, a variable-length encoding that uses either one or two 16-bit words, manifesting on most platforms as 2 or 4 8-bit bytes, for each character. {NB: This can't be true; UCS-2 has to predate UTF-16!}

66.44.102.169 wrote "{NB: This can't be true; UCS-2 has to predate UTF-16!}" in the article. UTF-16 was previously UCS-2, and I'm not sure that makes the statement untrue as such, but I reworded it anyway. Angela.

In the way the terminology today is used, UCS-2 doesn't have surrogate support, and certainly a 16-bit encoding without surrogate support existed before one with it. I don't think either of these were called UCS-2 and UTF-16 at the time though. Morwen 11:52, 6 Dec 2003 (UTC)
I wrote the comment that UTF-16 can't be the oldest encoding. Currently text may be encoded in UCS-2 or it may be encoded in UTF-16. Many Windows application designers pay no heed to the difference, but by their assumptions clearly support UCS-2 and not UTF-16. I speak of the MS-Windows world, wherein UCS-2LE holds dominant sway. In fact, Microsoft documents very commonly use the term "Unicode" as a synonym for UCS-2LE. Anyway, I meant that in the current time, we have both UCS-2 encodings and UTF-16 encodings, and I suspect we will all agree that the UCS-2 encodings (by whatever name) predate the UTF-16 encodings. :)
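A minimal Python sketch of the distinction being discussed (assuming a Python 3 environment): a supplementary-plane character becomes a surrogate pair in UTF-16, which is exactly what a UCS-2-only application cannot treat as one character.

    ch = '\U00010400'                  # DESERET CAPITAL LETTER LONG I, plane 1
    data = ch.encode('utf-16-le')
    units = [int.from_bytes(data[i:i+2], 'little') for i in range(0, len(data), 2)]
    print([hex(u) for u in units])     # ['0xd801', '0xdc00'] -- a surrogate pair
    # A UCS-2 reader sees two separate 16-bit units here, not one character.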

A Brain Dropping follows:

Wasn't Unicode created to encode all languages - not just 'human' languages? In the future then, why couldn't Unicode conceivably be used to encode extraterrestrial languages as well? (well, why not? hehehehe) Therefore, shouldn't the 'human' be removed from this page? One possible alternative: Unicode is the international standard created, whose goal is to specify a code matching every character needed by every known written language to a single unique integer number, called a code point.

The Universal Character Set, whether in its Unicode Standard or its ISO/IEC 10646 manifestation, was made to encode the writing systems of the world, not the languages of the world. Evertype 15:43, 2004 Jun 20 (UTC)
I'm not an expert on this, but I don't believe the Unicode Consortium seeks to encode writing systems whose existence we don't yet know of (and if we ever meet aliens who use more than 2^32 characters, Unicode will have a problem). I believe that Tolkien's Elvish scripts are in there, but other fictional scripts like Klingon are not. So they're not being wholly anthropocentric. adamrice 00:02, 11 Jul 2004 (UTC)
Tengwar and Cirth are not yet encoded, but are roadmapped for encoding. To answer the brain-dropping: Were we to meet aliens who had an encodable writing system, it is likely that their characters would fit. Evertype 11:59, 2004 Jul 11 (UTC)
I find that statement a little overreaching. The aliens could easily have a writing system with a million symbols, or several Chinese-size writing systems, or a history of writing that dates back millions of years instead of five or six thousand. Or simply be a group of ten different species of aliens with writing histories as complex as ours. The best we can say is that humans, all told, will use about 3 planes, and there are 17 planes of characters. --Prosfilaes 03:49, 12 Jul 2004 (UTC)
There is currently no Klingon alphabet/writing system that is suitable for encoding. The glyphs shown in Star Trek are merely a nearly-1:1-mapping of the Latin alphabet. The Star Trek folks seem to have a real Klingon alphabet, but have not yet published it. JensMueller 10:06, 5 Sep 2004 (UTC)
Any alphabet is going to have a nearly-1:1 mapping to the phonemes of the language, and hence to the Latin transcription. Why must a "real" Klingon alphabet be ill-fitted to the Klingon language? --Prosfilaes 04:11, 9 Sep 2004 (UTC)
No sufficient Klingon alphabet has been published, and nearly all Klingon users online are using a "roman transcription". The reason the Klingon encoding proposal was turned down was that it was hardly used (and also because the sounds were mapped to English sounds, rather than Klingon). If a canonical Klingon alphabet were to appear, I guess a Unicode encoding is likely. Klingon seems to have 26 sounds, according to the Wiki article and www.kli.org, which shouldn't be too difficult to find a mapping area for.
The Klingon piqaD alphabet has a mapping in the Private Use Area of Unicode, and has recently come into occasional use on the Internet. See, for example, Chatting in piqaD (http://www.kli.org/wiki/index.php?Chatting%20in%20pIqaD) and qurgh's blog (http://qurgh.blogspot.com/). The requirement for getting Klingon piqaD an assignment of regular Unicode code points is some level of use in data interchange. We can expect that it will qualify at some time in the future. --Cherlin

AAT

Can someone who knows what AAT is add to the AAT disambiguation page appropriately and also send the link on this page to the right place? Thanks EddEdmondson 08:59, 19 Jun 2004 (UTC)

Done. (AAT is Apple Advanced Typography, but we have no article on it at present.) --Zundark 09:40, 19 Jun 2004 (UTC)

Perhaps for "Issues?"

One of classicists' issues with Unicode has been the omission of the LATIN Y WITH MACRON characters. While the omission has been corrected in Unicode 3, most user agents don't know to render anything for that codepoint. Somewhere in that story is an issue that perhaps might make sense in the article -- either the omission of the letter, or the outdated support available by user agents (I don't see Microsoft rushing to update its fonts and packaging them as an update to Windows or Internet Explorer just to comply with recent standards).

Definitely not. This is not an "issue" with Unicode, but with implementation. We add support for classicists constantly, and it wasn't Y WITH MACRON alone either. Evertype 21:49, 2004 Jul 10 (UTC)

UTF-8 as the basis for multilingual text?

"UNIX-like operating systems such as GNU/Linux, BSD and Mac OS X have adopted Unicode, more specifically UTF-8, as the basis of representation of multilingual text."

Mac OS X stores a lot of text in UTF-8, but the other UTFs are also supported throughout the system and widely used. I agree that UTF-8 is currently the most widely used Unicode encoding (because it is the most legacy-compatible encoding), and that is important enough to mention in the leading section, but perhaps it should be rephrased so that it doesn't mislead the reader into believing those OSes don't support other kinds of Unicode? — David Remahl 04:16, 9 Sep 2004 (UTC)
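For readers comparing the encoding forms mentioned above, a small Python sketch (the sample string is chosen arbitrarily) showing the same text in three UTFs:

    s = 'Aπ語'
    for enc in ('utf-8', 'utf-16-le', 'utf-32-le'):
        data = s.encode(enc)
        print(enc, len(data), data.hex())
    # utf-8: 6 bytes (1+2+3), utf-16-le: 6 bytes (2+2+2), utf-32-le: 12 bytes.
    # Only UTF-8 leaves pure-ASCII text byte-for-byte unchanged, hence its
    # reputation as the most legacy-compatible encoding.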

Section 0

I think section 0 could be improved by changing the current text: In computing, Unicode is the international standard whose goal is to specify a code matching every character needed by every written human language, including many dead languages in small scholarly use, to a single unique integer number, called a code point.

To: In computing, Unicode is the international standard whose goal is to specify a code matching every character needed by every written human language, including many dead languages in small scholarly use such as foo and bar as well as [some other good example, perhaps a made-up language?], to a single unique integer number, called a code point.

I think the intro would be better by adding two examples there; furthermore I think "is the international standard" should be "is an international standard", or has it been approved by a major authority as the standard? -- Ævar Arnfjörð Bjarmason (https://academickids.com:443/encyclopedia/index.php?title=User_talk:%C6var_Arnfj%F6r%F0_Bjarmason&action=edit&section=new) 17:44, 2004 Oct 5 (UTC)

The parts of Unicode which are also in ISO 10646 most likely define it as the standard, also given the fact that the maintenance of ISO 8859 has been put into hibernation. Pjacobi 20:27, 5 Oct 2004 (UTC)


And, arguing about section 0, what about: in internationalization of software. A Thai programmer writing a program with a Thai user interface for Thai customers doesn't fit the definition of internationalization at all. -- Pjacobi 20:30, 5 Oct 2004 (UTC)

The interesting thing for most people is that it provides a way to store text in any language in a computer. Starting off by mentioning "unique integer numbers" doesn't make Unicode easier to understand. Even as a computer programmer, I have a bit of trouble reading that sentence and understanding what it means. And it's not really true as given; characters in Unicode is a polite fiction. Many characters (Maltese "ie", Lakota p with bar above, many Khmer characters) are more than one character in Unicode-ese. Going to rewrite boldly. --Prosfilaes 21:47, 11 Oct 2004 (UTC)
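A minimal Python sketch of both halves of that comment: a code point is just an integer assigned to a character, and one user-perceived character can be more than one code point (the combining-macron spelling of the Lakota letter is an assumption for illustration):

    for c in 'Aπ語':
        print('U+%04X' % ord(c), c)    # U+0041 A, U+03C0 π, U+8A9E 語

    p_bar = 'p\u0304'                  # p + COMBINING MACRON: one "letter"
    print(len(p_bar))                  # 2 -- two code points in Unicode-ese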

Largest and most complete

This phrase has appeared recently without much discussion. "Unicode is the most complete character set, and one of the largest." Could anyone give justification? -- Taku 06:14, Oct 12, 2004 (UTC)

ISO/IEC 10646? Unicode reserves 1,114,112 (2^20 + 2^16) code points, and currently assigns characters to more than 96,000 of those code points. No other encoding even comes close. {Ανάριον} 09:09, 12 Oct 2004 (UTC)
As for the completeness, take a look at mapping of Unicode characters for all the scripts encoded. {Ανάριον} 09:10, 12 Oct 2004 (UTC)


GB18030 is by definition as large as Unicode, but except for the pre-existing mappings, all GB18030 codepoints and Unicode codepoints, including yet unassigned ones, are algorithmically mapped. So it is more like a strange encoding form of Unicode.
For certain scripts, there are character sets with more precomposed glyphs, e.g. VISCII for Vietnamese, TSCII for Tamil, or some scholarly encoding for pointed Hebrew. But they don't count as larger, as they don't support more than one or two scripts, and they don't count as more complete, as the encoded characters are uniquely representable in Unicode as sequences including combining characters.
So yes, according to all my knowledge and research, Unicode is the most complete and one of the largest character sets for which information is freely available in languages I can read.
If you have knowledge of implemented character sets (not counting proposals, which are cheap to make) which are more complete than Unicode, please elaborate.
Otherwise, I'll revert your reversal.
Pjacobi 09:12, 12 Oct 2004 (UTC)
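The GB18030 point above can be checked directly with Python's built-in gb18030 codec (a small sketch; the sample characters are arbitrary):

    for ch in ('A', '語', '\u0950', '\U0001D11E'):
        # Every code point round-trips, even non-Chinese or supplementary ones,
        # which is why GB18030 behaves like another encoding form of Unicode.
        assert ch.encode('gb18030').decode('gb18030') == ch
        print(ch, ch.encode('gb18030').hex())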
Unicode is meant to be a superset of all known character sets, so it is hardly possible that there are character sets not covered by it. (And surely all VISCII and TSCII characters are included in Unicode as they are). — Monedula 11:11, 12 Oct 2004 (UTC)
Unicode is only a superset of the character sets in use when they started out. In practice, they were a superset of most new character sets up to 2000, at which point they stopped encoding new precomposed characters. (So they're still a superset of them, in a practical sense.) One of the Chinese standards, which encoded every minor variation on the ideograph seen, wasn't added to Unicode, which decided to adopt a more unifying encoding policy, but all the Chinese and Japanese standards in widespread use are subsets. --Prosfilaes 21:53, 12 Oct 2004 (UTC)
I agree with your first half-sentence, but Unicode has decided to not encode any more precomposed characters. Only for pre-existing national and international standards was there a consensus to include the precomposed characters. Now, new suggestions for precomposed characters are routinely declined, and for good reasons. In fact it is hoped that in some future version (6.0?) all existing precomposed characters will become deprecated. A like case exists for glyph variants. What got in is in, but new additions will be declined. See also: http://www.unicode.org/standard/where/
So there is no chance (neither is there a necessity) that TSCII codepoint 0xE0 "tU" will be assigned a single codepoint in Unicode. Instead it transcodes as [U+0ba4 U+0bc2] --Pjacobi 12:37, 12 Oct 2004 (UTC)
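A minimal Python sketch of that transcoding claim, using the standard unicodedata module:

    import unicodedata
    tu = '\u0BA4\u0BC2'                           # the sequence given above for TSCII "tU"
    print([unicodedata.name(c) for c in tu])      # ['TAMIL LETTER TA', 'TAMIL VOWEL SIGN UU']
    print(len(unicodedata.normalize('NFC', tu)))  # still 2: no precomposed form exists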
Ignoring the whole Han ideograph issue, and ignoring the sets that are basically new encodings of Unicode, what is there? TRON? --Prosfilaes 21:53, 12 Oct 2004 (UTC)
The idea of switching between several character encodings isn't unique to TRON; it was already included in ISO 2022. And both have in common that they make implementation difficult and scatter the design process of new script encodings instead of unifying it. I don't think much of it is still in use. Heck, you can't even use both (TRON and full ISO-2022 with escape switching) on the Web or in e-mail. Most Unicode criticism on the TRON advocates' pages is just outdated or a result of misunderstandings. --Pjacobi 22:32, 12 Oct 2004 (UTC)

I am convinced that Unicode is probably the largest and most complete character set, but can we still ignore criticism of Unicode? What I often hear about Unicode is that it is inadequate in handling old text or text containing outdated characters. Maybe most of the criticism is pointless or a result of misunderstandings, but I still hear it, and I don't think we should make a general statement which not everyone agrees with. Unicode is meant to be the largest and the most complete, but whether it really is so is disputed, even if such dispute is nonsense in actuality. -- Taku 22:41, Oct 12, 2004 (UTC)

Yes, of course the criticisms must be included, but we must try hard to find the right criticisms and the right way to present them. Don't forget the long expertise of Unicode in this field and the large number of field experts contributing to the evolving Unicode effort. We would achieve nothing but spoil the credibility of Wikipedia if we hastily added criticisms of mediocre quality.
A generic problem with Unicode is the long process it takes to get additions done. This is the downside of centralism. And you need somebody with "weight" to get major additions and changes done: either a national standards body or field experts of standing.
A brainstorming list of criticisms:
  • Unicode got the Hebrew points for biblical texts all wrong (or something like that, I'm no expert)
  • Unicode has unified scripts (and requires different fonts and markup to differentiate), which should not have been unified.
  • Unicode has not unified scripts, which should have been unified, as they are only font differences.
  • Unicode has too many presentation forms for complex shaping scripts
  • Unicode has not enough presentation forms for complex shaping scripts
  • Unicode has too many precomposed glyphs
  • Unicode has not enough precomposed glyphs
As you can see, some criticisms arise out of the fact that decisions must be made in a standard on questions which are viewed differently by different people.
Pjacobi 00:32, 13 Oct 2004 (UTC)
If you want to say that Unicode is not the largest and most complete, then there must be something that's larger or more complete. If you tell us what it is, we can discuss it.
Most of the complaints about Unicode don't stem from size or completeness. Most of the scripts and characters that are left are very obscure and almost invariably not used for writing new material. The complaints come from how Unicode treats the existing scripts; often the question is whether two entities should be treated as distinct. Since in all these cases they are distinct in some ways and not in others, there's no "right" answer that will satisfy everyone. The Chinese and Japanese encodings that are supposedly more "complete" are in reality more fine-grained, in that they separate characters that Unicode unifies. --Prosfilaes 20:01, 13 Oct 2004 (UTC)

Reorganization

I made some reorganization of the sections and continued work on the leading section. I think the new 4 big sections make good sense: origin and development, mapping and encoding, process and issues, and in use. In addition to this, we probably need:

  • difference between character and glyph; we should give some examples
  • difference between mapping and encoding; particularly, what is a code point, and what is a plane?
  • short summary of UTF; what is a UTF, and why do we want it?
  • size comparison, particularly what Unicode chooses not to include; perhaps Pjacobi is right that some criticisms are wrong, but it is still true that many people advertise their sets as being larger and more complete. We need some response to them.

If I have some time, I will try to address them, but you can also help me. Finally, I'm sorry for the late reply to the "Unicode as largest and most complete" question. I slightly reworded the mention. Please make further edits if you think it necessary. -- Taku 20:42, Oct 17, 2004 (UTC)

You must give some concrete examples of who advertises which character set to be larger in what specific sense. --Pjacobi 22:18, 17 Oct 2004 (UTC)
OK, the press release of Chokanji 3 [2] (http://www.chokanji.com/press/ck3/010116ck3press.html) (in Japanese) says it supports 170,000 kanji while Unicode handles 20,000 Chinese characters, 12,000 of which are kanji. -- Taku 02:15, Oct 18, 2004 (UTC)
a) If I'm not mistaken, the press release is dated 2001-01-06. So, nearly four years later, are there any implementations? Can you give the URL of a single webpage in this charset? Is the IANA registration in progress? Does somebody work on GNU iconv support? Does somebody work on IBM ICU support? You can't compare vaporware to a widely implemented standard.
b) As of version 4.0, Unicode supports 71,000 Han characters; it is horribly outdated or mis-informed to state the number 20,000. And the PRC is busily adding more. It is a political decision of JIS not to propose adding more kanji, either because JIS doesn't see the necessity or for other reasons.
Pjacobi 06:12, 18 Oct 2004 (UTC)
We don't compare. You wanted "some concrete examples of who advertises which character set to be larger in what specific sense", so this is the answer. Again, I didn't mean to say that what they are saying is fair. I won't use their product because there is just so little compatibility, and besides, I don't have any practical problem with Unicode. I mean, I agree with you, so I am not sure whom you are trying to convince. -- Taku 13:14, Oct 18, 2004 (UTC)

Thank you for giving the concrete example. Yes, I specifically asked for it. I apologize for replying in flame-war style. --Pjacobi 14:41, 18 Oct 2004 (UTC)

Many documents in non-western languages, for instance, are still represented in other character sets. Which languages? Which character sets? In this generality it doesn't help. Please state the languages and character sets used. And remember, GB18030 is now fully harmonized with Unicode and cannot be considered a different character set, but a Unicode encoding form standardized by somebody other than ISO or the Unicode Consortium, namely the Guobiao. --Pjacobi 22:23, 17 Oct 2004 (UTC)

Maybe Shift-JIS? I don't think it is the only character set used besides Unicode. If you know more, that would help. -- Taku 02:15, Oct 18, 2004 (UTC)
The largest use of non-Unicode charsets is still EBCDIC, ASCII and ISO-8859-1, as seen on this wiki. So this doesn't look like a west-vs-east problem to me. The difference is that almost universally, all other charsets are nowadays considered to be subsets of Unicode. And especially the HTML and XML character model explicitly states that while the physical charset may vary, the logical charset is always Unicode. Also in programming, it is nearly always assumed that everything can be converted (and most things reversibly) to Unicode.
So if I judge this correctly, the Unicode character encoding model is only challenged by some users of Japanese, and not much is known outside of Japan about this. As said above, I'm very skeptical about the practical relevance of the Unicode challengers. But the interesting point, why this happens in Japan, seems to be good stuff for a separate article about Japanese character encoding.
Pjacobi 06:23, 18 Oct 2004 (UTC)
I had absolutely no intention of making a case like a west-vs-east problem. If you think some sentences are problematic, then go ahead and edit. I just wanted to illustrate the adoption of Unicode, and the sentence was absolutely never meant to imply that the use of Unicode is problematic or anything. Besides, I am not sure what you are saying. I don't think you believe any non-Unicode character sets have died out completely. We want to show when Unicode is used and when it is not. I mean, what do you want after all? -- Taku 13:14, Oct 18, 2004 (UTC)
Sorry for being unclear. And apologies for not contributing to the article itself at the moment. I am of the opinion that some non-trivial additions are dearly needed (on the character model, on characters vs glyphs vs graphemes), but I feel unable to do it myself. Perhaps I'll try it next week.
No, I surely don't want to say non-Unicode character sets have died out completely. What I tried to say is that the character encoding model of Unicode is nearly a universal success, and nowadays other character sets are mostly seen as subsets of Unicode. This wasn't the case ten years ago.
Pjacobi 14:41, 18 Oct 2004 (UTC)

It's fine. I was just puzzled about what upset you so much. As a matter of fact, I am neither a backer of Unicode nor a detractor. I am only interested in making the article informative for those who have questions about Unicode. It's very surprising how many people don't know Unicode well, even computer programmers. The article could be a help for them. -- Taku 15:54, Oct 23, 2004 (UTC)

Fully agree. When supporting charset issues (as I do sometimes for Firebird SQL), it's quite amazing that some programmers at first don't even see a problem in the different mappings between characters and bytes. --Pjacobi 17:55, 23 Oct 2004 (UTC)

Phishing

In the section that talks about pre-composed characters vs. composing with several codepoints, how about mentioning that this capability opens up lots of opportunities for phishing once URLs in UTF-8 are more universally accepted? For example, once accented characters are common in website addresses, links with a pre-composed "č" and a separate "c" plus a combining accent will point to different sites, but look identical to the user (in fact the intent is for them to look the same). I don't know if this info belongs here, but it's an interesting tidbit. Rlobkovsky 00:06, 6 Dec 2004 (UTC)

If and when URIs start supporting characters beyond ASCII in a standard way, some decomposing must take place, as according to the principles behind Unicode the precomposed character à is exactly equivalent to ` + a. Any future internet domain funkčynáme.ext will have to point to the same IP(v6?) address for all its possible decompositions. User:Anárion/sig 12:26, 28 Dec 2004 (UTC)
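A minimal Python sketch of the look-alike problem and the normalization cure, reusing the hypothetical domain from the comment above:

    import unicodedata
    a = 'funk\u010Dyn\u00E1me.ext'      # precomposed č and á
    b = 'funkc\u030Cyna\u0301me.ext'    # c + combining caron, a + combining acute
    print(a == b)                       # False: different code points, same appearance
    print(unicodedata.normalize('NFC', a) == unicodedata.normalize('NFC', b))  # True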

Sentence

"To address the short coming, Unicode is being revised periodically with the addition of more characters and increase in the size of characters potentially represented in unicode."

It's something of a moot point now, but in case it comes up in the future, the reason I cut that sentence is because it was inaccurate. They don't add more characters to address the shortcoming (one word) that people don't use Unicode; there are probably fewer than a hundred thousand people who would use any of the scripts that are going to be added to Unicode. And for several of the scripts, like Egyptian Hieroglyphics or Hungarian Runic or Tengwar, there's no commercial interest in the script, and there's little to no academic interest in encoding the script (the Egyptologist community has basically told Unicode to go away and come back in a few decades). Hobbyist demand for unencoded scripts isn't a huge shortcoming that Unicode is trying to overcome.

What does "increase in the size of characters potentially represented in unicode" mean? I assume by size, you mean number (since you can increase the size of characters just by using a larger font), but I'm not sure what "potentially" means here. As I read it, it's redundant with "addition of more characters". --Prosfilaes 03:38, 11 Dec 2004 (UTC)

The simplest representation of Unicode (giving every character the same number of bits, rather than a more complicated variable-width encoding) has historically increased from 16 bits to about 20 bits. There are (currently) about 2^20 "potential" characters. I suspect the original author expected that in the future *more* than (roughly) 20 bits will be required, and that the consortium is planning to "periodically" increase the number of bits. --DavidCary 22:17, 11 Feb 2005 (UTC)

The consortium doesn't plan to increase the number of bits. In 15 years, two planes of characters have almost been filled, out of 15. Just as importantly, those two planes include virtually every character used in a computer; a few people use Tengwar or pIqaD or Cuneiform or Egyptian hieroglyphics, but they're incredibly rare and they amount to a few thousand characters, not the more than a half million it would take to require expansion. And honestly, if it was a matter of expanding for those or ignoring them, their concerns are minor enough and the changes in every piece of Unicode software major enough I suspect they would get ignored. --Prosfilaes 00:30, 1 Jun 2005 (UTC)
It depends on exactly how you define filled.
The BMP (http://www.unicode.org/roadmaps/bmp/) (plane 0) is basically full, mostly with fully allocated and standardised codepoints.
The SMP (http://www.unicode.org/roadmaps/smp/) (plane 1) is mostly stuff in various stages of approval, but it still has quite a bit of room marked as completely unknown (less than half, though).
The SIP (http://www.unicode.org/roadmaps/sip/) (plane 2) is more than half filled by "CJK Unified Ideographs Extension B", and most of the rest is pencilled in for yet more CJK stuff.
The SSP (http://www.unicode.org/roadmaps/ssp/) (plane 14) is mostly empty right now.
IIRC planes 15 and 16 are reserved for private use, but I'm not sure.
So if you count the areas that are pencilled in for future scripts, then a LOT more than 2 planes are in use. (The sketch below shows the plane arithmetic.)
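A minimal Python sketch of the plane arithmetic referred to above (the sample code points are chosen arbitrarily):

    def plane(cp):
        return cp >> 16                 # 17 planes of 65,536 code points each

    for cp in (0x0041, 0x1D11E, 0x20B9F, 0xE0001, 0x10FFFD):
        print('U+%05X -> plane %d' % (cp, plane(cp)))
    # planes 0 (BMP), 1 (SMP), 2 (SIP), 14 (SSP), 16 (private use)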

Revision history has a future date

Please justify this. If it is not justified within a few days I will be reverting. Plugwash 12:15, 25 Dec 2004 (UTC)

I've reverted it already. Future dates are never justified for this sort of thing, because schedules can change. --Zundark 12:39, 25 Dec 2004 (UTC)

A little clarification about Tolkien's scripts and Klingon?

I don't mean to be a spoilsport, but these bits just don't seem to fit in _at_ all. I was reading through it just then and I thought an anonymous user must have added it in for a laugh. I think a rewording's in order, but perhaps it's just me. I definitely don't think it deserves quite as much as has been written about it, though. :-/ Someone care to back me up here? I'm not too sure of myself. Edit: Under the 'Development' category --Techtoucian 10:16, 07 Jan 2005 (UTC)

I think they fit, if only because they show how the Unicode consortium actually considers scripts which to some seem no more than a 'laughing matter' -- certainly Tengwar and Cirth see more actual use than some of the scripts which are already encoded. User:Anárion/sig 12:41, 7 Jan 2005 (UTC)

Chinese Punctuation

"Unicode also has a number of serious mistakes in the area of CJK punctuation. For example, it mistakenly treats partial punctuation marks in the various CJK encodings as full punctuation marks, for instance treating half of a CJK ellipsis as the same as an English ellipsis, even though the two glyphs are both semantically and visually dissimilar (considering that the CJK ellipsis can be centred between the baseline and ascender, but the English ellipsis must always be placed on the baseline)." --Gniw 06:53, 6 Feb 2005 (added to article)

This page should not be a page of everyone's minor complaints about Unicode. I've read the Unicode list for four or five years, I've read the Standard, I've read both pro- and anti-Unicode pages (including all the Tron pages in English, and they include about every general or Japanese-specific Unicode complaint possible) and I've never heard this before. Given that it seems to be one person's complaint, I don't think it's worthy of being added to an encyclopedia article. --Prosfilaes 21:48, 6 Feb 2005 (UTC)

This is not a minor complaint if you do bilingual typesetting or write bilingual (Chinese and English) web pages. The ellipsis misidentification in Unicode results in very ugly mixed English-and-Chinese web pages. But given the sad state of punctuation typesetting taught at art schools these days, and the way English computing has changed Chinese typesetting, I'm not surprised that no one has talked about this. Ah Wing 22:49, 9 Feb 2005 (UTC)
I stand by my position. This is an encyclopedia, not a list of what's wrong with Unicode. If there are no English pages on the issue, then most of the people who could fix the issue have never heard of it; and if no one has ever seen fit to bring it before them, I hardly see it as a major issue. I wouldn't post bug reports about a program on Wikipedia, so I don't see this as appropriate.
But please, if someone else has an opinion on this, please chime in. --Prosfilaes 03:45, 11 Feb 2005 (UTC)
Why isn't this a big issue? The triviality of this is precisely the reason it is important; it shows that Unicode has mistakes that even primary school students should be able to spot, yet here it is in the standard. This just shows how sloppy Unicode is regarding CJK.
Do you really think that if people who are likely to be affected by the issue have mentioned it, and the discussion happens not to be in English, then it is not an issue?!
What you mean is "the use of English is a requirement for an issue to be recognized as an issue" or "no matter whether people have discussed it or not, if it has never been discussed in English then it cannot possibly be an issue". Or, in short, "English is the measure of all things". If this is not Western imperialism I don't know what is, and you don't understand why the Japanese are opposed to Unicode? Opposition to Unicode is not really so much a technical problem as a perception of a lack of respect; the fact that my contribution was deleted on New Year tells a lot. 24.101.156.72 19:18, 11 Feb 2005 (UTC)
If a Chinese encyclopedia wrote an article complaining about some problem in the English Wikipedia, and they never mentioned it to anyone who could fix it, we'd be a little pissed. Bring the issue before us, and if we choose not to fix it, then there's a valid complaint, but we can't fix what we don't know about. If it doesn't matter enough to bring it to the people who can fix it, or the people discussing it don't respect the standard enough to try and fix it, it's not an important issue.
I think it says a lot that you're not discussing the issue; you're complaining about imperialism and that somehow people shouldn't correct articles on holidays. I will repeat again: this is a thirty-year-old problem made by Chinese standards. You can't do better using Big5 or any other Chinese standard. Which says a lot to me about the importance of the problem.
While we're on the subject of "Western Imperialism", I will note that the US-based Summer Institute of Linguistics and the Ireland-based Michael Everson have been instrumental in getting new scripts (e.g. several Philippine scripts like Buhid) into the standard, while the Japanese standards body sent a letter to the ISO working group asking for such new standards efforts to cease. Such accusations are insulting and provably inaccurate. --Prosfilaes 23:46, 11 Feb 2005 (UTC)
Excuse me. Do you know what a "double-byte character set" is? Big5 (as well as GB, EUC-KR, EUC-JP, and Shift JIS) is a DBCS, and by the very nature of a DBCS, you can't encode a whole CJK ellipsis. We have to encode half of the ellipsis. Now when the Unicode committee looked at the CJK national character sets and decided that half a CJK ellipsis is equal to a full English ellipsis, that is incredible sloppiness. This is not a "thirty year old problem made by Chinese standards" in the context of Unicode.
And how do you want me to discuss the issue, when whatever I write will simply get deleted? 66.163.1.120 00:05, 12 Feb 2005 (UTC)
It's not incredible sloppiness. It's a unification decision that had some negative side effects. (And we could discuss the incredible sloppiness involved in assuming that every non-ASCII character was double-width, one that still sometimes plagues Russians who get the pleasure of dealing with double-width Cyrillic.) And I want you to discuss it here, on the talk page, instead of making changes on the main page, until some sort of consensus is reached. (And I'd really like a third party to chime in.) --Prosfilaes 01:32, 12 Feb 2005 (UTC)
I cannot understand why this is not sloppiness. The two are completely different. As I originally wrote, (1) they are different in form (the CJK ellipsis can be set on the baseline, or between the baseline and the ascender; the English ellipsis can only be set on the baseline) and (2) they are different in meaning (two "ideographic three dot leader"s, as some Japanese people think it should be called (http://cl.cocolog-nifty.com/dtp/2004/09/u2026_horizonal.html), are required to make one true ellipsis, and the leader itself is meaningless; one "horizontal ellipsis" (U+2026) is meaningful by itself). The two cannot be unified no matter whether unification is considered to be based on form or on meaning.
Ok, you might argue that this only means they were unable to spot the differences. But they went to so much effort to distinguish between almost-indistinguishable variations in ideogram forms (many are really typographic stylistic variations that unfortunately came to be associated with different countries); not making a comparable effort to distinguish these two glyphs certainly sounds extremely strange. Even if they had checked the punctuation sections of a Chinese or Japanese dictionary, they would have realized that the "ideographic three dot leader" is not itself a punctuation mark. And this has the added benefit that dictionaries usually set the ellipsis between the baseline and the ascender, so they would simultaneously have realized that the two are different in form. In short, there is simply no basis for "unification": yet they got "unified". Aside from "incredible sloppiness" I really cannot explain this.
(I do accept that Unicode unifications are sometimes based on form, though I think this is contrary to the spirit of Unicode unification. I personally don't like the CJK unification myself, and you won't understand why I feel this way until you try to work on a Unicode font yourself. But if you ask for my objections to unification decisions, I'll say the unification of the umlaut and the diaeresis really makes no sense, considering they dis-unify a lot of other things (I'm talking about western script, not CJK) that look 100% identical. In the case of the CJK vs English ellipses, form is not even a question, since they are different in form.)
I do agree with the double-width mess. For us the opposite problem occurs, that all the box-drawing characters become single-width, making Unicode almost useless in terminal emulators if box-drawing characters are to appear anywhere. --Wing 03:45, 12 Feb 2005 (UTC)
First, I stand by my point: for 15 years this unification has stood, and no one has complained to Unicode. For probably ten of those years there would have been no problem disunifying the characters, yet not a single standards body made the request. If they were so completely inappropriately unified, there has been incredible sloppiness and apathy on the part of the users of the affected scripts.
You make too many assumptions about what I do and don't understand. I believe I understand the reasons why people disagree with CJK unification, and seriously doubt that making a font would make a bit of difference. The whole question is whether the difference is a difference in preferred fonts or a difference in script.
You are apparently a splitter. Besides the fundamental backward compatibility problems, I can't imagine trying to explain to the people at Distributed Proofreaders that coöperate uses a different ö from Köln. Splitting these would cause a world of pain for the advantage of a few librarians. In any case, the various opinions on when to split and when to unify are a much more general and interesting topic to add to the page. --Prosfilaes 00:31, 13 Feb 2005 (UTC)
Well, I think I am correct in assuming that you have never worked on a Unicode font. Before I attempted to work on a Unicode font some time ago, I thought just like you (being content with the state of the Han unification).
In the current state of the Han unification, there are many characters that are not unified. However, after adding a radical, the new characters are all unified.
If I want to make one Unicode font containing all the ideograms (not an unreasonable thing, since making such a font requires so much effort), which style should I choose? If adding the radicals did not make the new characters unified, I'd be all happy too (it would just mean that all variants are distinguished, as opposed to variants not being distinguished); as it is, no matter which style I choose, I end up with a font that is wrong.
Regarding the ellipsis itself, it is not a difference in font. Would you consider an ellipsis-like glyph that is raised above the baseline (to about x-height) suitable for typesetting English? From your viewpoint, this is exactly what unification of U+2026 and the hypothetical "ideographic three dot leader" means.
In a sense, the mis-unification of the ellipsis and the "ideographic three dot leader" can be thought of as equivalent to the problem of having full-width Cyrillic letters (in that both mistakenly equate a glyph that's only appropriate in C/J/K with an incompatible western glyph). If you find full-width Cyrillic letters unacceptable and "incredible sloppiness", I fail to understand why an ellipsis raised to x-height for English is acceptable or is not the result of sloppiness.
I would not object to your calling us "incredibly apathetic" regarding Unicode. We have already acquired "incredible apathy" after using the suboptimal national character sets for so long; and many of our typesetting and/or punctuation conventions have been destroyed by Western-centric computing for so long (can you imagine that just about ten years ago even westerners knew that in C/J/K, numbers should be grouped by myriads, but now many Chinese do not even know this, and instead group digits by thousands and then laboriously count the digits every time a large number is read… and many Chinese are so used to western-style underlining that they are now desensitized to the grammatical mistakes they are making every time they underline Chinese words that are not proper names…). I definitely think that this is pathetic enough, and there is no need for Unicode to make this kind of mistake to further worsen the situation.
I am not saying that the knowledge of proper punctuation has not deteriorated in the West; but at least the deterioration has not been codified into an international standard (unless I count this ellipsis mis-unification)… --Wing 04:30, 13 Feb 2005 (UTC)
PS: Perhaps there is; other than this ellipsis thing, there is also this hyphen-dash confusion. It seems to be just as bad…
AFAICT the hyphen-dash issue comes from the fact that ASCII and other encodings of its era came from the days when characters on computers were fixed width. Given that, and the limited number of code values available in ASCII, it seemed totally reasonable to unify the hyphen, the dashes and the minus sign. There was also the unification of beta and sharp s in IBM code page 437. Plugwash 02:46, 1 Jun 2005 (UTC)
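A minimal Python sketch of that dis-unification, listing the separate Unicode characters behind ASCII's single 0x2D (the selection is illustrative):

    import unicodedata
    for cp in (0x002D, 0x2010, 0x2013, 0x2014, 0x2212):
        print('U+%04X' % cp, unicodedata.name(chr(cp)))
    # HYPHEN-MINUS, HYPHEN, EN DASH, EM DASH, MINUS SIGN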

Revision history year-wikilinks

The year wikilinks in the revisions list are a little confusing; I clicked through thinking I was going to be led to that particular revision, but found myself on a general-year page. Could you reconsider these links, please? Thanks. Courtland

Unicode adoption in e-mail

The adoption of Unicode in e-mail has been very slow. Most East-Asian text is still encoded in a local encoding such as ISO-2022-JP, and many commonly used e-mail programs still cannot handle Unicode data correctly. This situation is not expected to change in the foreseeable future.

This doesn't look like an accurate picture to me. Mac OS X's default Mail.app client has transparently supported Unicode since 2001. Didn't Windows 95's Internet Mail and News or Outlook Express have Unicode support even earlier? I don't know how widely used Unicode is, but hasn't it been very widely supported for years? Michael Z. 2005-04-12 21:20 Z

Keep in mind that the fact that some programs support Unicode does not mean they can handle text encoded in Unicode correctly. The situation may have changed since then, but I used to hear that you should not send mail in Unicode because many programs have problems with it. You see, I heard a report that even Gmail does not correctly handle the subjects of e-mails. More research would certainly help, but I don't think the above is far from reality. -- Taku 02:35, Apr 13, 2005 (UTC)

Input methods

On Windows XP, any Unicode character can be input by pressing Alt, then, with Alt down (and using only the numeric keypad keys), pressing the decimal digits of the Unicode character one after the other. For example, Alt, then, with Alt still down, 9, then 6 and then 0 yields π (Greek lowercase letter Pi). For values less than 256, precede the digits with a 0, to avoid code page translation (see Extended ASCII), e.g. Alt 0, 1, 6, 5 yields ¥.

This just doesn't work when I try it. Pressing Alt-9-6-0 gives me └, which appears to be "Box Drawings Light Up And Right", character x2514/9,492 (└). However, Alt-0-x-x-x does work for me and always has (I can get the yen symbol fine). Does this statement need correction or clarification? —Simetrical (talk) 01:57, 8 May 2005 (UTC)

Forgot to mention, I do use Windows XP, English-language SP 2 to be precise. —Simetrical (talk) 02:31, 8 May 2005 (UTC)

I use WinXP, Spanish-language SP2, and it does not work for me, either. Nor does it work for anyone I know who uses WinXP. By the way, the character '└' can also be obtained by pressing Alt+192; moreover, I have found that under WinXP, Alt+number produces the same output as Alt+number modulo 256 (provided that any zeroes before the original number are preserved). So, Alt+289 produces '!', Alt+416 produces 'á', and Alt+0416 produces ' ', the non-breaking space.
I think that paragraph should be removed. --Fibonacci 21:53, 21 May 2005 (UTC)
It seems to depend on the edit control in use. Stuff that uses the standard EDIT control (e.g. Notepad) doesn't allow Unicode entry with Alt+numpad, whereas stuff that uses the standard RICHEDIT control (e.g. WordPad) does (tested on English WinXP non-SP2; not sure if it's original or SP1). Plugwash 22:37, 21 May 2005 (UTC)
The way I understand it, a four-digit or longer number enters the Unicode character. A three-digit number under 256 enters the character in the current code page, which I suppose would be Win CP-1252 for English and some European languages (don't know if that includes Spanish). It appears that three-digit numbers over 255 are processed with some funky math (Shouldn't numbers over 255 be Unicode? Can anyone think of a reason for using modulo-256 except programmer laziness?). Michael Z. 2005-05-25 17:45 Z
NO NO NO.
In apps that use the Windows EDIT control (i.e. Notepad) you CANNOT enter Unicode with Alt+numpad (unless the app makes special provisions, which some apps seem to do), and numbers entered with Alt+numpad are treated modulo 256 regardless of length.
In apps that use the Windows RICHEDIT control, numbers over 256 and all numbers 4 digits or more are Unicode (for numbers like 052 the local code page matches Unicode anyway, so it's impossible to really tell).
Other apps that set up their own edit controls may behave differently again. Plugwash 18:40, 25 May 2005 (UTC)
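The modulo-256 behaviour reported for the EDIT control can be reproduced arithmetically; a minimal Python sketch matching the examples in this thread:

    print(chr(289 % 256))                             # '!'  (Alt+289)
    print(bytes([416 % 256]).decode('cp437'))         # 'á'  (Alt+416, OEM code page 437)
    print(repr(bytes([416 % 256]).decode('cp1252')))  # '\xa0', NBSP (Alt+0416, CP1252)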

Nifty resource.

I found, at some point, a nifty resource for Unicode at fileformat.info (http://www.fileformat.info/info/unicode/index.htm). It has some rather decent tools for looking up individual codepoints, like U+0023 (http://www.fileformat.info/info/unicode/char/0023/index.htm) or U+20AC (http://www.fileformat.info/info/unicode/char/20ac/index.htm). Each page includes a browser test (http://www.fileformat.info/info/unicode/char/20ac/browsertest.htm) and font support info (http://www.fileformat.info/info/unicode/char/20ac/fontsupport.htm). Perhaps it would be useful to link U+F00F the same way we link PMID, ISBN and RFC IDs now. grendel|khan 16:50, 2005 May 25 (UTC)

Unicode 4.1.0

Can someone give me a link so that I can download Unicode 4.1.0 for free? JarlaxleArtemis 00:14, May 27, 2005 (UTC)

http://www.unicode.org -- Monedula 05:56, 27 May 2005 (UTC)