So, I'm building a web scraper with BeautifulSoup to scrape every ad on a Craigslist page. Here's what I have so far:
import requests
from bs4 import BeautifulSoup, SoupStrainer
import bs4
page = "http://miami.craigslist.org/search/roo?query=brickell"
search_html = requests.get(page).text
roomSoup = BeautifulSoup(search_html, "html.parser")
ad_list = roomSoup.find_all("a", {"class":"hdrlnk"})
#print ad_list
ad_ls = [item["href"] for item in ad_list]
#print ad_ls
ad_urls = ["miami.craigslist.org" + ad for ad in ad_ls]
#print ad_urls
url_str = [str(unicode) for unicode in ad_urls]
# What's in url_str?
for url in url_str:
    print url
When I run this, I get:
miami.craigslist.org/mdc/roo/4870912192.html
miami.craigslist.org/mdc/roo/4858122981.html
miami.craigslist.org/mdc/roo/4870665175.html
miami.craigslist.org/mdc/roo/4857247075.html
miami.craigslist.org/mdc/roo/4870540048.html ...
This is exactly what I want: a list of the URLs for every ad on the page.
My next step is to pull some content out of each of those pages, so I need to build another BeautifulSoup object for each one. But this is where I get stuck:
for url in url_str:
    ad_html = requests.get(str(url)).text
And here, finally, is my question: what exactly is this error? The only part I can make any sense of is the last two lines:
Traceback (most recent call last):
  File "webscraping.py", line 24, in <module>
    ad_html = requests.get(str(url)).text
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/api.py", line 65, in get
    return request('get', url, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/api.py", line 49, in request
    response = session.request(method=method, url=url, **kwargs)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/sessions.py", line 447, in request
    prep = self.prepare_request(req)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/sessions.py", line 378, in prepare_request
    hooks=merge_hooks(request.hooks, self.hooks),
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/models.py", line 303, in prepare
    self.prepare_url(url, params)
  File "/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/site-packages/requests/models.py", line 360, in prepare_url
    "Perhaps you meant http://{0}?".format(url))
requests.exceptions.MissingSchema: Invalid URL
u'miami.craigslist.org/mdc/roo/4870912192.html': No schema supplied.
Perhaps you meant http://miami.craigslist.org/mdc/roo/4870912192.html?
It seems like the problem is that all my links start with u', and that requests.get() can't handle them, which is why you see me trying to force every URL into a regular string with str(). But no matter what I do, I keep getting this error. Is there something else I'm missing? Am I completely misunderstanding my problem?
Thanks in advance!
Best answer
It looks like you've misunderstood the problem. The message:
u'miami.craigslist.org/mdc/roo/4870912192.html': No schema supplied.
Perhaps you meant http://miami.craigslist.org/mdc/roo/4870912192.html?
means that the http:// prefix (the schema) is missing from the front of the URL. The u' is just Python 2's marker for a unicode string and has nothing to do with the error, so str() can't fix it. Replacing
ad_urls = ["miami.craigslist.org" + ad for ad in ad_ls]
with
ad_urls = ["http://miami.craigslist.org" + ad for ad in ad_ls]
should do the job.
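For completeness, here is a minimal sketch of how the whole flow might look with the schema fixed, continuing into the per-ad loop the question stops at. It's written for the same Python 2.7 environment shown in the traceback; using urljoin instead of string concatenation is my own suggestion, and printing each page's <title> is just a stand-in for whatever content you actually want to extract:

# Sketch of the corrected pipeline (Python 2).
import requests
from bs4 import BeautifulSoup
from urlparse import urljoin

page = "http://miami.craigslist.org/search/roo?query=brickell"
search_html = requests.get(page).text
roomSoup = BeautifulSoup(search_html, "html.parser")

ad_list = roomSoup.find_all("a", {"class": "hdrlnk"})
# urljoin prepends the scheme and host from the search page,
# so the hrefs become full http:// URLs.
ad_urls = [urljoin(page, item["href"]) for item in ad_list]

for url in ad_urls:
    ad_html = requests.get(url).text          # no MissingSchema: every url starts with http://
    ad_soup = BeautifulSoup(ad_html, "html.parser")
    print ad_soup.title.get_text()            # placeholder: extract what you need here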