I'm running into some odd behavior with BS4. I copied 20 pages of the site I want to scrape, and this code works fine against that copy on my private web server. When I run it against the actual website, it randomly misses the 8th column of a row. I haven't run into this before and can't find any other posts about the issue. The 8th column is "frequency_rank". What is going on here, why does it only happen to the last column, and how can I fix it?
import requests
import json
from bs4 import BeautifulSoup

base_url = 'http://hanzidb.org'


def soup_the_page(page_number):
    url = base_url + '/character-list/by-frequency?page=' + str(page_number)
    response = requests.get(url, timeout=5)
    soup = BeautifulSoup(response.content, 'html.parser')
    return soup


def get_max_page(soup):
    paging = soup.find_all("p", {'class': 'rigi'})
    # Isolate the first paging link
    paging_link = paging[0].find_all('a')
    # Extract the last page number of the series
    max_page_num = int([item.get('href').split('=')[-1] for item in paging_link][-1])
    return max_page_num


def crawl_hanzidb():
    result = {}
    # Get the page scrape data
    page_content = soup_the_page(1)
    # Get the page number of the last page
    last_page = get_max_page(page_content)
    # Get the table data
    for p in range(1, last_page + 1):
        page_content = soup_the_page(p)
        for trow in page_content.find_all('tr')[1:]:
            char_dict = {}
            i = 0
            # Set the character as the dict key
            character = trow.contents[0].text
            # Initialize list on dict key
            result[character] = []
            # Walk the row's cells to pick up the character and radical urls
            for tcell in trow.children:
                char_position = 0
                radical_position = 3
                if i == char_position or i == radical_position:
                    for content in tcell.children:
                        if type(content).__name__ == 'Tag':
                            if 'href' in content.attrs:
                                url = base_url + content.attrs.get('href')
                                if i == char_position:
                                    char_dict['char_url'] = url
                                if i == radical_position:
                                    char_dict['radical_url'] = url
                i += 1
            char_dict['radical'] = trow.contents[3].text[:1]
            char_dict['pinyin'] = trow.contents[1].text
            char_dict['definition'] = trow.contents[2].text
            char_dict['hsk_level'] = trow.contents[5].text[:1] if trow.contents[5].text[:1].isdigit() else ''
            char_dict['frequency_rank'] = trow.contents[7].text if trow.contents[7].text.isdigit() else ''
            result[character].append(char_dict)
        print('Progress: ' + str(p) + '%.')
    return(result)


crawl_data = crawl_hanzidb()
with open('hanzidb.json', 'w') as f:
    json.dump(crawl_data, f, indent=2, ensure_ascii=False)
Best answer
It looks like the issue is malformed HTML on the site. If you look at the source of the page you posted, there are two closing </td> tags right before the frequency-rank column. Example:
<tr>
<td><a href="/character/的">的</a></td>
<td>de</td><td><span class="smmr">possessive, adjectival suffix</span></td>
<td><a href="/character/白" title="Kangxi radical 106">白</a> 106.3</td>
<td>8</td><td>1</td>
<td>1155</td></td>
<td>1</td>
</tr>
I think this is what trips up the parser you are using (html.parser). If you install the lxml parser instead, it seems to work. Try this:

First, install the lxml parser:

pip install lxml

Then change the parser in your soup_the_page() function:

soup = BeautifulSoup(response.content, 'lxml')

Then run your script.
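Put together, soup_the_page() from your question would then read as follows (only the parser string changes, everything else stays the same):

def soup_the_page(page_number):
    url = base_url + '/character-list/by-frequency?page=' + str(page_number)
    response = requests.get(url, timeout=5)
    # lxml tolerates the stray </td> that trips up html.parser
    soup = BeautifulSoup(response.content, 'lxml')
    return soup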
It seems to work: print(trow.contents[7].text) no longer gives an index-out-of-range error.
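As an aside (not part of the original answer), a quick way to see how the two parsers treat that stray </td> is to feed the malformed row above to both and print what each one recovers. A minimal self-contained sketch; it assumes lxml is installed, and the exact cell counts may vary with your bs4/lxml versions:

from bs4 import BeautifulSoup

# The malformed row from the page source, wrapped in a <table> so both
# parsers treat it as table markup. Note the extra </td> after 1155.
malformed_row = '''
<table><tr>
<td><a href="/character/的">的</a></td>
<td>de</td><td><span class="smmr">possessive, adjectival suffix</span></td>
<td><a href="/character/白" title="Kangxi radical 106">白</a> 106.3</td>
<td>8</td><td>1</td>
<td>1155</td></td>
<td>1</td>
</tr></table>
'''

for parser in ('html.parser', 'lxml'):
    soup = BeautifulSoup(malformed_row, parser)
    row = soup.find('tr')
    cells = row.find_all('td')
    # Show how many <td> cells each parser recovered and their text
    print(parser, '->', len(cells), 'cells:', [c.get_text(strip=True) for c in cells])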
Related question on Stack Overflow: python - Table scraping with BeautifulSoup4 missing cells: https://stackoverflow.com/questions/54606008/