I recently posted asking how to scrape data from the Yellow Pages, and @alecxe helped a lot by showing me some new ways to extract the data, but I'm stuck again. Now I want to scrape the link for each business listing so I can get the extra data that is on each business's Yellow Pages page. I want to add a variable called "url" that holds the business's href, not the business's actual website but its Yellow Pages profile page. I've tried various things but nothing seems to work. The href is under class="business-name".

import csv
import requests
from bs4 import BeautifulSoup


with open('cities_louisiana.csv', 'r') as cities:
    lines = cities.read().splitlines()
# the with statement closes the file automatically

for city in lines:
    print(city)
url = "http://www.yellowpages.com/search?search_terms=businesses&geo_location_terms="baton%rouge+LA&page="+str(count)

for city in lines:
    for x in range (0, 50):
        print("http://www.yellowpages.com/search?search_terms=businesses&geo_location_terms=baton%rouge+LA&page="+str(x))
        page = requests.get("http://www.yellowpages.com/search?search_terms=businesses&geo_location_terms=baton%rouge+LA&page="+str(x))
        soup = BeautifulSoup(page.text, "html.parser")
        for result in soup.select(".search-results .result"):
            try:
                name = result.select_one(".business-name").get_text(strip=True, separator=" ")
            except:
                pass
            try:
                streetAddress = result.select_one(".street-address").get_text(strip=True, separator=" ")
            except:
                pass
            try:
                city = result.select_one(".locality").get_text(strip=True, separator=" ")
                city = city.replace(",", "")
                state = "LA"
                zip = result.select_one('span[itemprop$="postalCode"]').get_text(strip=True, separator=" ")
            except:
                pass

            try:
                telephone = result.select_one(".phones").get_text(strip=True, separator=" ")
            except:
                telephone = "No Telephone"
            try:
                categories = result.select_one(".categories").get_text(strip=True, separator=" ")
            except:
                categories = "No Categories"
            completeData = name, streetAddress, city, state, zip, telephone, categories
            print(completeData)
            with open("yellowpages_businesses_louisiana.csv", "a", newline="") as write:
                wrt = csv.writer(write)
                wrt.writerow(completeData)
                write.close()

Best answer

Several things you should implement:

- extract the business link from the href attribute of the element with the business-name class; in BeautifulSoup this can be done by treating the element like a dictionary (a short sketch of this follows the list)
- make the link absolute with urljoin()
- make requests to the business pages while maintaining a web-scraping session
- parse each business page with BeautifulSoup as well and extract the desired information
- add a time delay to avoid hitting the site too often
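
For the first point, here is a minimal sketch of the dictionary-style attribute access (the HTML is a made-up stand-in for a single search result, not the real Yellow Pages markup):

from bs4 import BeautifulSoup

# stand-in HTML for one search result (hypothetical markup)
html = '<a class="business-name" href="/baton-rouge-la/mip/some-business">Some Business</a>'

soup = BeautifulSoup(html, "html.parser")
link_element = soup.select_one(".business-name")

# a Tag can be indexed like a dictionary to read its attributes
print(link_element["href"])      # "/baton-rouge-la/mip/some-business"
print(link_element.get("href"))  # .get() returns None instead of raising KeyError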


A complete working example that prints the business name from the search results page and the business description from the business profile page:

from urllib.parse import urljoin

import requests
import time
from bs4 import BeautifulSoup


url = "http://www.yellowpages.com/search?search_terms=businesses&geo_location_terms=baton%rouge+LA&page=1"


with requests.Session() as session:
    session.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'}

    page = session.get(url)
    soup = BeautifulSoup(page.text, "html.parser")
    for result in soup.select(".search-results .result"):
        business_name_element = result.select_one(".business-name")
        name = business_name_element.get_text(strip=True, separator=" ")

        link = urljoin(page.url, business_name_element["href"])

        # extract additional business information
        business_page = session.get(link)
        business_soup = BeautifulSoup(business_page.text, "html.parser")
        description = business_soup.select_one("dd.description").text

        print(name, description)

        time.sleep(1)  # time delay to not hit the site too often
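
If you want to fold this back into the CSV-writing loop from the question, here is a hedged sketch (the selectors, output filename, and the 0-50 page range come from the question; the None check is an extra safeguard, and it assumes the .business-name element carries the profile href):

import csv
import time
from urllib.parse import urljoin

import requests
from bs4 import BeautifulSoup


with requests.Session() as session:
    session.headers = {'User-Agent': 'Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12_2) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/55.0.2883.95 Safari/537.36'}

    with open("yellowpages_businesses_louisiana.csv", "a", newline="") as f:
        writer = csv.writer(f)

        for x in range(0, 50):
            page = session.get("http://www.yellowpages.com/search?search_terms=businesses&geo_location_terms=baton%rouge+LA&page=" + str(x))
            soup = BeautifulSoup(page.text, "html.parser")

            for result in soup.select(".search-results .result"):
                business_name_element = result.select_one(".business-name")
                if business_name_element is None:
                    continue  # skip results without a profile link

                name = business_name_element.get_text(strip=True, separator=" ")
                # the Yellow Pages profile URL the question asks for, made absolute
                url = urljoin(page.url, business_name_element["href"])

                writer.writerow([name, url])

            time.sleep(1)  # time delay between result pages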
