Problem Description
Is there any conceivable reason why I would see different results using Unicode string literals versus the actual hex value for the UChar?
UnicodeString s1(0x0040); // @ sign
UnicodeString s2("\u0040");
s1 isn't equivalent to s2. Why?
Recommended Answer
The \u escape sequence is, AFAIK, implementation-defined, so it's hard to say why they are not equivalent without knowing the details of your particular compiler. In a narrow string literal the compiler converts \u0040 to its execution character set, and UnicodeString's const char* constructor then reinterprets those bytes in the default codepage, so the result depends on your toolchain. Either way, it's simply not a safe way of doing things.
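A minimal sketch of how to observe the divergence (assuming ICU headers and a C++11 compiler; the exact bytes the compiler produces for "\u0040" depend on your execution character set):

#include <iostream>
#include <unicode/unistr.h>

using icu::UnicodeString; // ICU 4.x+ puts UnicodeString in namespace icu

int main() {
    UnicodeString s1(static_cast<UChar>(0x0040)); // the code unit U+0040, stored directly
    UnicodeString s2("\u0040");                   // bytes converted from the default codepage

    // Dump the length and first code unit of each string to see where they diverge.
    std::cout << "s1: length=" << s1.length()
              << " first=0x" << std::hex << static_cast<unsigned>(s1.charAt(0)) << '\n';
    std::cout << std::dec << "s2: length=" << s2.length()
              << " first=0x" << std::hex << static_cast<unsigned>(s2.charAt(0)) << '\n';
    std::cout << std::boolalpha << "equal: " << (s1 == s2) << '\n';
    return 0;
}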
UnicodeString has a constructor taking a UChar and one taking a UChar32. I'd be explicit when using them:
UnicodeString s(static_cast<UChar>(0x0040));
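Being explicit also matters once you leave the BMP, since only the UChar32 overload can take a supplementary code point; a short sketch (U+1F600 here is just an illustration):

UnicodeString at(static_cast<UChar>(0x0040));       // '@': a single UTF-16 code unit
UnicodeString smile(static_cast<UChar32>(0x1F600)); // U+1F600: stored as a surrogate pair
// smile.length() == 2, but smile.countChar32() == 1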
UnicodeString also provides an unescape() method that's fairly handy:
UnicodeString s = UNICODE_STRING_SIMPLE("\\u4ECA\\u65E5\\u306F").unescape(); // 今日は
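For completeness, a small usage sketch (toUTF8String is part of UnicodeString's public API; the printed output assumes a UTF-8 terminal):

#include <iostream>
#include <string>
#include <unicode/unistr.h>

int main() {
    icu::UnicodeString s = UNICODE_STRING_SIMPLE("\\u4ECA\\u65E5\\u306F").unescape();
    std::string utf8;
    s.toUTF8String(utf8);      // convert the UTF-16 contents to UTF-8
    std::cout << utf8 << '\n'; // prints 今日は on a UTF-8 terminal
    return 0;
}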