Example site: http://www.luoo.net/music/<issue number>
e.g. http://www.luoo.net/music/760
The plan is to scrape each issue's title (e.g. "Hello World"), its pic (cover image URL), and its desc (e.g. "本期音乐为......《8-bit Love》").
Steps:
1) Create the project
In a shell, in whatever directory you want the project in, run: scrapy startproject luoo
Open the generated luoo folder in PyCharm. The layout is sketched below.
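After startproject finishes, the project should look roughly like this (the exact files vary a little between Scrapy versions, so treat this as a sketch rather than an exact listing):
luoo/
    scrapy.cfg
    luoo/
        __init__.py
        items.py
        pipelines.py
        settings.py
        spiders/
            __init__.py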
2) Write items.py
import scrapy
class LuooItem(scrapy.Item):
    url = scrapy.Field()
    title = scrapy.Field()
    pic = scrapy.Field()
    desc = scrapy.Field()
3) Write the spider
Create luoospider.py under the spiders folder:
import scrapy
from luoo.items import LuooItem

class LuooSpider(scrapy.Spider):
    name = "luoo"
    allowed_domains = ["luoo.net"]
    start_urls = []
    for i in range(750, 763):
        url = 'http://www.luoo.net/music/%s' % str(i)
        start_urls.append(url)

    def parse(self, response):
        item = LuooItem()
        item['url'] = response.url
        item['title'] = response.xpath('//span[@class="vol-title"]/text()').extract()
        item['pic'] = response.xpath('//img[@class="vol-cover"]/@src').extract()
        item['desc'] = response.xpath('//div[@class="vol-desc"]/text()').extract()
        return item
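Before running the whole spider, it can be worth checking the XPath expressions interactively with scrapy shell; a quick sketch using the same selectors as parse() above:
scrapy shell "http://www.luoo.net/music/760"
>>> response.xpath('//span[@class="vol-title"]/text()').extract()
>>> response.xpath('//img[@class="vol-cover"]/@src').extract()
>>> response.xpath('//div[@class="vol-desc"]/text()').extract()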
4) Leave pipelines.py unchanged
5) In a command prompt, cd into the luoo directory
scrapy list (lists the available spiders; here just luoo)
scrapy crawl luoo -o result.csv (runs the spider and saves the output as result.csv in the current directory)
6) Open result.csv in Notepad++, change the encoding to ANSI, and save; Excel will then open it without garbled characters. (An alternative that avoids this manual step is sketched below.)
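Assuming a Scrapy version new enough to support the FEED_EXPORT_ENCODING setting (this is only a sketch, not part of the original workflow), the export can be written with a UTF-8 BOM so Excel detects the encoding by itself:
# luoo/settings.py
FEED_EXPORT_ENCODING = 'utf-8-sig'   # BOM makes Excel read the exported CSV as UTF-8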
*Remaining TODOs:
1) Consider migrating the data into a MySQL database later (a rough pipeline sketch follows right after this list)
2) Save the cover images separately into an image folder
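For TODO 1), a minimal item pipeline sketch, assuming a local MySQL server, the pymysql package, and a pre-created vols table with matching columns (all of these names are hypothetical, not taken from the project above):

# luoo/pipelines.py (sketch)
import pymysql

class MysqlPipeline(object):
    def open_spider(self, spider):
        # connection parameters are placeholders; adjust to your own database
        self.conn = pymysql.connect(host='localhost', user='root', password='',
                                    db='luoo', charset='utf8mb4')
        self.cursor = self.conn.cursor()

    def process_item(self, item, spider):
        # title/pic/desc come back from extract() as lists, so join them into strings
        self.cursor.execute(
            'INSERT INTO vols (url, title, pic, `desc`) VALUES (%s, %s, %s, %s)',
            (item['url'], ''.join(item['title']), ''.join(item['pic']), ''.join(item['desc'])))
        self.conn.commit()
        return item

    def close_spider(self, spider):
        self.conn.close()

The pipeline would also need to be enabled in settings.py, e.g. ITEM_PIPELINES = {'luoo.pipelines.MysqlPipeline': 300}.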
memory: attaching the code that implemented the same thing with urllib two months ago (Python 3.4).
Looking at it now, Scrapy really is so much more convenient, not to mention its seriously powerful extensibility:
import urllib.request
import re
import time

def openurl(urls):
    htmls = []
    for url in urls:
        req = urllib.request.Request(url)
        req.add_header('User-Agent', 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/44.0.2403.107 Safari/537.36')
        # Mozilla/5.0 (Windows NT 6.1; WOW64; rv:40.0) Gecko/20100101 Firefox/40.0
        response = urllib.request.urlopen(req)  # open the Request object so the custom User-Agent is actually sent
        htmls.append(response.read())
        time.sleep(5)
    return htmls

def jiexi(htmls):
    pics = []
    titles = []
    contents = []
    for html in htmls:
        html = html.decode('utf-8')
        pics.append(re.findall('<div class="player-wrapper".*?>.*?<img.*?src="(.*?).jp.*?".*?alt=".*"', html, re.S))
        titles.append(re.findall('class="vol-title">(.*?)</span>', html, re.S))
        contents.append(re.findall('<div.*?class="vol-desc">.*?(.*?)</div>', html, re.S))
    i = len(titles)
    with open('C:\\Users\\Administrator\\Desktop\\test.txt', 'w') as f:
        for x in range(i):
            print("正在下载期刊:%d" % (746 - x))
            f.write("期刊名:" + str(titles[x])[2:-2] + "\n")
            f.write("图片链接:" + str(pics[x])[2:-2] + ".jpg\n")
            content = str(contents[x])[4:-2]
            content = content.strip()  # str methods return new strings, so re-assign the result
            print(content.count("<br>\n"))
            content = content.replace("<br>\n", "#")
            f.write("配诗:" + content + "\n\n\n")

yur = 'http://www.luoo.net/music/'
urls = []
for i in range(657, 659):
    urls.append(yur + str(i))
htmls = openurl(urls)
jiexi(htmls)  # writes the parsed results to test.txt