I am trying to extract some information about MTG cards from a web page with the program below, but I repeatedly retrieve information about the given initial page (InitUrl) only. The crawler never moves on. I am starting to believe that I am not using the correct URL, or that there is some restriction in urllib that I am not noticing. This is the code I have been struggling with for weeks now:

import re
from math import ceil
from urllib.request import urlopen as uReq, Request
from bs4 import BeautifulSoup as soup

InitUrl = "https://mtgsingles.gr/search?q=dragon"
NumOfCrawledPages = 0
URL_Next = ""
NumOfPages = 4   # depth of pages to be retrieved

query = InitUrl.split("?")[1]


for i in range(0, NumOfPages):
    if i == 0:
        Url = InitUrl
    else:
        Url = URL_Next

    print(Url)

    UClient = uReq(Url)  # downloading the url
    page_html = UClient.read()
    UClient.close()

    page_soup = soup(page_html, "html.parser")

    cards = page_soup.findAll("div", {"class": ["iso-item", "item-row-view"]})

    for card in cards:
        card_name = card.div.div.strong.span.contents[3].contents[0].replace("\xa0 ", "")

        if len(card.div.contents) > 3:
            cardP_T = card.div.contents[3].contents[1].text.replace("\n", "").strip()
        else:
            cardP_T = "Does not exist"

        cardType = card.contents[3].text
        print(card_name + "\n" + cardP_T + "\n" + cardType + "\n")

    try:
        URL_Next = InitUrl + "&page=" + str(i + 2)

        print("The next URL is: " + URL_Next + "\n")
    except IndexError:
        print("Crawling process completed! No more infomation to retrieve!")
    else:
        NumOfCrawledPages += 1
        Url = URL_Next
    finally:
        print("Moving to page : " + str(NumOfCrawledPages + 1) + "\n")

Best Answer

One of the reasons your code fails is that you do not use cookies. The site appears to require them in order to allow pagination.
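If you wanted to stay with urllib, cookie support can be added through http.cookiejar. The snippet below is only a minimal sketch of that mechanism and not part of the solution that follows:

import http.cookiejar
from urllib.request import build_opener, HTTPCookieProcessor

# a cookie jar attached to an opener stores whatever cookies the site sets
cookie_jar = http.cookiejar.CookieJar()
opener = build_opener(HTTPCookieProcessor(cookie_jar))

# the first request collects the session cookies, later requests send them back
response = opener.open("https://mtgsingles.gr/")
print(response.status, len(cookie_jar))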
A clean way to extract the data of interest goes something like this:

import requests
from bs4 import BeautifulSoup

# the site actually uses this url under the hood for paging - check out Google Dev Tools
paging_url = "https://mtgsingles.gr/search?ajax=products-listing&lang=en&page={}&q=dragon"
return_list = []
# the page-scroll will only work when we support cookies
# so we fetch the page in a session
session = requests.Session()
session.get("https://mtgsingles.gr/")

All pages except the last one have a "next page" button, so we use that knowledge and loop until the next button disappears. When it does, meaning the last page has been reached, the button is replaced by an "li" tag with the class "next hidden". That tag only exists on the last page.
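As a quick illustration of the check that detects this, select() returns an empty list until the hidden marker appears. The HTML snippets below are made up for the example, not copied from the site:

from bs4 import BeautifulSoup

middle_page = '<ul class="pagination"><li class="next"><a>Next</a></li></ul>'
last_page = '<ul class="pagination"><li class="next hidden"><a>Next</a></li></ul>'

# an empty result list is falsy, so the loop keeps going; a non-empty one ends it
print(BeautifulSoup(middle_page, "html.parser").select("li.next.hidden"))  # []
print(BeautifulSoup(last_page, "html.parser").select("li.next.hidden"))    # one match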
Now we can start the loop:
page = 1 # set count for start page
keep_paging = True # use flag to end loop when last page is reached
while keep_paging:
    print("[*] Extracting data for page {}".format(page))
    r = session.get(paging_url.format(page))
    soup = BeautifulSoup(r.text, "html.parser")
    items = soup.select('.iso-item.item-row-view.clearfix')
    for item in items:
        name = item.find('div', class_='col-md-10').get_text().strip().split('\xa0')[0]
        toughness_element = item.find('div', class_='card-power-toughness')
        try:
            toughness = toughness_element.get_text().strip()
        except AttributeError:  # find() returned None, i.e. the card has no power/toughness
            toughness = None
        cardtype = item.find('div', class_='cardtype').get_text()
        card_dict = {
            "name": name,
            "toughness": toughness,
            "cardtype": cardtype
        }
        return_list.append(card_dict)

    if soup.select('li.next.hidden'): # this element only exists if the last page is reached
        keep_paging = False
        print("[*] Scraper is done. Quitting...")
    else:
        page += 1

# do stuff with your list of dicts - e.g. load it into pandas and save it to a spreadsheet

This will keep paging until no more pages exist, regardless of how many pages the site ends up having.
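The last comment in the code hints at loading the result into pandas; a minimal sketch of that step, assuming pandas is installed and return_list has been filled by the loop above:

import pandas as pd

df = pd.DataFrame(return_list)  # one row per card dict
df.to_csv("dragon_cards.csv", index=False)  # file name is just an example
print(df.head())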
My point in the comment above was simply that if an exception is raised in your code, the page count never increases. That is probably not what you want, which is why I suggested you read up a bit more on how the whole try/except/else/finally construct behaves.
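To make that concrete, here is a small standalone example of how the four clauses interact; it is not tied to the scraping code, just a demonstration of the control flow:

def fetch(fail):
    try:
        if fail:
            raise IndexError("simulated failure")
    except IndexError:
        print("except: runs because the try block raised")
    else:
        print("else: runs because the try block finished without an exception")
    finally:
        print("finally: always runs")

fetch(fail=False)  # prints the else and finally lines
fetch(fail=True)   # prints the except and finally lines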
