I am trying to scrape http://www.funda.nl/koop/amsterdam/, which lists houses for sale in Amsterdam. The main page contains many links, some of which point to individual houses for sale. Ultimately I want to follow those links and extract data from them.
As a first step, I am simply trying to list the links corresponding to the individual houses. I noticed that their URLs contain "huis-" followed by an 8-digit code, e.g. http://www.funda.nl/koop/amsterdam/huis-49801910-claus-van-amsbergstraat-86/. I want to use the regular expression r'huis-\d{8}' to match this subset of URLs.
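As a quick sanity check outside Scrapy, the pattern itself behaves as intended; both URLs below are taken from the question:

```python
import re

# Pattern from the question: "huis-" followed by an 8-digit code
pattern = re.compile(r'huis-\d{8}')

house_url = "http://www.funda.nl/koop/amsterdam/huis-49801910-claus-van-amsbergstraat-86/"
listing_page = "http://www.funda.nl/koop/amsterdam/"

print(bool(pattern.search(house_url)))     # True: contains "huis-" + 8 digits
print(bool(pattern.search(listing_page)))  # False: no "huis-" segment
```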
I am trying to do this with Scrapy's LinkExtractor, but it does not seem to work. The spider I wrote is as follows:
import scrapy
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor
from Funda.items import FundaItem
from scrapy.shell import inspect_response

class FundaSpider(CrawlSpider):
    name = "Funda"
    allowed_domains = ["funda.nl"]
    start_urls = ["http://www.funda.nl/koop/amsterdam/"]
    le1 = LinkExtractor()

    rules = (
        Rule(LinkExtractor(allow=r'huis-\d{8}'), callback='parse_item'),
    )

    def parse_item(self, response):
        links = self.le1.extract_links(response)
        for link in links:
            item = FundaItem()
            item['url'] = link.url
            print("The item is " + str(item))
            yield item
From the main project directory, if I run

scrapy crawl Funda -o funda.json

the resulting funda.json file begins with the following lines:
[
{"url": "http://www.funda.nl/cookiebeleid/"},
{"url": "http://www.funda.nl/koop/amsterdam/huis-49728947-emmy-andriessestraat-374/ufsavqdqfvxyerrvff.html"},
{"url": "http://www.funda.nl/koop/amsterdam/huis-49728947-emmy-andriessestraat-374/"},
{"url": "http://www.funda.nl/koop/"},
{"url": "https://www.funda.nl/mijn/login/?ReturnUrl=%2Fkoop%2Famsterdam%2Fhuis-49728947-emmy-andriessestraat-374%2F"},
{"url": "https://www.funda.nl/mijn/aanmelden/?ReturnUrl=%2Fkoop%2Famsterdam%2Fhuis-49728947-emmy-andriessestraat-374%2F"},
{"url": "http://www.funda.nl/language/switchlanguage/?language=en&returnUrl=%2Fkoop%2Famsterdam%2Fhuis-49728947-emmy-andriessestraat-374%2F"},
{"url": "https://help.funda.nl/hc/nl/categories/200207038"},
{"url": "http://www.funda.nl/koop/amsterdam/"},
As you can see, it contains several links that have neither "huis-" nor an 8-digit code. How can I filter this down to only the "real" house links?
Best Answer
The problem is that the regular expression appears only in the definition of the rules argument, not in the definition of le1. The extractor in the Rule only decides which pages get passed to parse_item; inside parse_item, le1 then extracts every link on the page with no filter at all. Adding the same allow pattern to the definition of le1 gives the expected output.
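A minimal sketch of the fix and its effect, using URLs copied from the funda.json output above (the corrected le1 line is shown in a comment, since running it requires a Scrapy project):

```python
import re

# Corrected extractor definition inside the spider (requires scrapy):
#     le1 = LinkExtractor(allow=r'huis-\d{8}')
#
# LinkExtractor's allow argument applies this regex to each extracted
# URL, so the effect on the output above can be reproduced directly:
allow = re.compile(r'huis-\d{8}')

# URLs taken from the funda.json output shown in the question
urls = [
    "http://www.funda.nl/cookiebeleid/",
    "http://www.funda.nl/koop/amsterdam/huis-49728947-emmy-andriessestraat-374/",
    "http://www.funda.nl/koop/",
]
house_urls = [u for u in urls if allow.search(u)]
print(house_urls)
```

Only the URL containing "huis-" plus an 8-digit code survives the filter.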
Regarding python - using the "allow" keyword in Scrapy's LinkExtractor, a similar question can be found on Stack Overflow: https://stackoverflow.com/questions/38351744/