I've been tinkering with the following script:

# -*- coding: utf-8 -*-
import codecs
from BeautifulSoup import BeautifulSoup, NavigableString, UnicodeDammit
import urllib2, sys
import time
import time
try:
    import timeoutsocket # http://www.timo-tasi.org/python/timeoutsocket.py
    timeoutsocket.setDefaultSocketTimeout(10)
except ImportError:
    pass

h=u'\u3000\u3000\u4fe1\u606f\u901a\u4fe1\u6280\u672f'

address=urllib2.urlopen('http://stock.eastmoney.com/news/1408,20101022101395594.html').read()
soup=BeautifulSoup(address)

p=soup.findAll('p')
t=p[2].string[:10]

The output is as follows:

>>> print t
¡¡¡¡ÐÅϢͨ
>>> print h
　　信息通
>>> t
u'\xa1\xa1\xa1\xa1\xd0\xc5\xcf\xa2\xcd\xa8'
>>> h
u'\u3000\u3000\u4fe1\u606f\u901a'
>>> h.encode('gbk')
'\xa1\xa1\xa1\xa1\xd0\xc5\xcf\xa2\xcd\xa8'
Put simply: when I pass this html through BeautifulSoup, it takes the gbk-encoded text and treats it as unicode, instead of recognizing that it needs to be decoded first. "h" and "t" should be identical, since h is just me taking the text from the html file and converting it by hand.

How can I fix this?
Best,
Wheaton

Best answer

The file's meta tag claims that the character set is GB2312, but the data contains a character from the newer GBK/GB18030 and this is what's tripping BeautifulSoup up:

simon@lucifer:~$ python
Python 2.7 (r27:82508, Jul 3 2010, 21:12:11)
[GCC 4.0.1 (Apple Inc. build 5493)] on darwin
Type "help", "copyright", "credits" or "license" for more information.
>>> import urllib2
>>> data = urllib2.urlopen('http://stock.eastmoney.com/news/1408,20101022101395594.html').read()
>>> data.decode("gb2312")
Traceback (most recent call last):
File "", line 1, in
UnicodeDecodeError: 'gb2312' codec can't decode bytes in position 20148-20149: illegal multibyte sequence
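
As a sanity check, you can pull the declared charset straight out of the raw bytes - a rough sketch, assuming the usual <meta ... charset=...> markup (the regex is mine, not anything BeautifulSoup provides):

>>> import re
>>> m = re.search(r'charset=([-\w]+)', data, re.IGNORECASE)
>>> m and m.group(1)

On this page that should come back as 'gb2312', which is exactly the claim the data itself then violates.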


At this point, UnicodeDammit bails out, tries chardet, UTF-8 and finally Windows-1252, which always succeeds - this is what you got, by the looks of it.
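
BeautifulSoup 3 records whatever UnicodeDammit ended up using in the soup's originalEncoding attribute, and once you know the wrong codec, you can usually repair an already-mangled string by reversing the bad decode. A sketch, assuming the fallback really was windows-1252 as described above (it works for this particular slice because every character round-trips):

>>> from BeautifulSoup import BeautifulSoup
>>> soup = BeautifulSoup(data)
>>> soup.originalEncoding  # expect 'windows-1252' here, per the fallback chain above
>>> t = soup.findAll('p')[2].string[:10]
>>> print t.encode('windows-1252').decode('gb18030')
　　信息通

Re-encoding with the codec that was wrongly applied restores the original GB-coded bytes, which then decode cleanly.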

If we tell the decoder to replace unrecognised characters with a '?', we can see the character that's missing in GB2312:

>>> print data[20140:20160].decode("gb2312", "replace")
毒尾气二�英的排放难


Using the correct encoding:

>>> print data[20140:20160].decode("gb18030", "replace")
毒尾气二噁英的排放难
>>> from BeautifulSoup import BeautifulSoup
>>> s = BeautifulSoup(data, fromEncoding="gb18030")
>>> print s.findAll("p")[2].string[:10]
  信息通信技术是&


Also:

>>> print s.findAll("p")[2].string
  信息通信技术是“十二五”规划重点发展方向,行业具有很强的内在增长潜
力,增速远高于GDP。软件外包、服务外包、管理软件、车载导航、网上购物、网络游戏、
移动办公、移动网络游戏、网络视频等均存在很强的潜在需求,使信息技术行业继续保持较
高增长。
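
If you are scraping more pages like this, you can bake the fix into a small helper that always forces the GB18030 codec, since GB18030 is a superset of both GB2312 and GBK. A minimal sketch (the fetch_gb_page name is mine, not part of the original script):

import urllib2
from BeautifulSoup import BeautifulSoup

def fetch_gb_page(url):
    # Grab the raw bytes and let BeautifulSoup decode them as GB18030,
    # which also covers pages that declare gb2312 but contain newer characters.
    data = urllib2.urlopen(url).read()
    return BeautifulSoup(data, fromEncoding="gb18030")

soup = fetch_gb_page('http://stock.eastmoney.com/news/1408,20101022101395594.html')
print soup.findAll('p')[2].string[:10]

Forcing gb18030 is safe for any page that really is in one of the GB encodings; for pages that might not be Chinese at all, you would want to fall back to BeautifulSoup's normal detection instead.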

Original question on Stack Overflow: "Python - BeautifulSoup html parsing handles gbk encoding poorly - Chinese web-scraping problem", https://stackoverflow.com/questions/4092606/
