This post covers how to convert UTF-16 to UTF-8 and remove the BOM; the question and recommended answer below may be a useful reference for anyone facing the same problem.
Problem description
We have a data entry person who encoded files in UTF-16 on Windows, and we would like to convert them to UTF-8 and remove the BOM. The UTF-8 conversion works, but the BOM is still there. How would I remove it? This is what I currently have:
import codecs
import os

batch_3 = {'src': '/Users/jt/src', 'dest': '/Users/jt/dest/'}
batches = [batch_3]

for b in batches:
    s_files = os.listdir(b['src'])
    for file_name in s_files:
        ff_name = os.path.join(b['src'], file_name)
        if os.path.isfile(ff_name) and ff_name.endswith('.json'):
            print ff_name
            target_file_name = os.path.join(b['dest'], file_name)
            # Re-encode each .json file from UTF-16-LE to UTF-8 in 1 MiB blocks
            BLOCKSIZE = 1048576
            with codecs.open(ff_name, "r", "utf-16-le") as source_file:
                with codecs.open(target_file_name, "w+", "utf-8") as target_file:
                    while True:
                        contents = source_file.read(BLOCKSIZE)
                        if not contents:
                            break
                        target_file.write(contents)
If I hexdump -C I see:
Wed Jan 11$ hexdump -C svy-m-317.json
00000000 ef bb bf 7b 0d 0a 20 20 20 20 22 6e 61 6d 65 22 |...{.. "name"|
00000010 3a 22 53 61 76 6f 72 79 20 4d 61 6c 69 62 75 2d |:"Savory Malibu-|
in the resulting file. How do I remove the BOM?
Thanks
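As an aside, the ef bb bf bytes at the start of that hexdump are the UTF-8 BOM. A minimal sketch to test whether a converted file still begins with it (the file name is taken from the hexdump above and is only an example):

import codecs

# Read the first three bytes and compare against the UTF-8 BOM (EF BB BF).
with open('svy-m-317.json', 'rb') as f:
    has_bom = f.read(3) == codecs.BOM_UTF8

print(has_bom)  # True means the BOM is still present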
Recommended answer
Just use str.decode and str.encode:
with open(ff_name, 'rb') as source_file:
    with open(target_file_name, 'w+b') as dest_file:
        contents = source_file.read()
        dest_file.write(contents.decode('utf-16').encode('utf-8'))
str.decode will get rid of the BOM for you (and deduce the endianness). The original code keeps the BOM because the 'utf-16-le' codec treats the leading U+FEFF as ordinary text, which then gets re-encoded as EF BB BF in UTF-8; the plain 'utf-16' codec consumes it instead.
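As a related sketch (not part of the original answer), a file that has already been written with a UTF-8 BOM can be cleaned up by decoding it with the utf-8-sig codec, which strips a leading BOM if one is present, and re-encoding it as plain utf-8; the path here is only an example:

# Strip a leading UTF-8 BOM from a file that was already converted.
# 'utf-8-sig' drops the BOM on decode if present; encoding back with plain
# 'utf-8' writes the text without one.
path = 'svy-m-317.json'

with open(path, 'rb') as f:
    text = f.read().decode('utf-8-sig')

with open(path, 'wb') as f:
    f.write(text.encode('utf-8'))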
That concludes this post on converting UTF-16 to UTF-8 and removing the BOM; hopefully the recommended answer above is helpful.