Crawler (18): Introduction to Scrapy
Table of Contents

Chapter 16: Introduction to Scrapy
1. Introduction
2. Installing Scrapy
3. Scrapy Workflow
4. Scrapy Quick Start
5. Example
6. Storage Pipelines
Chapter 16: Introduction to Scrapy

1. Introduction
Scrapy is an application framework written for crawling websites and extracting structured data; we only need to implement a small amount of code to start scraping quickly. Scrapy is built on the Twisted asynchronous networking framework, which speeds up downloads and makes crawlers faster and more powerful. It is an asynchronous crawler framework, and its advantages are that it is configurable and highly extensible. The word "twisted" is fitting: the code contains many closures and nested functions. Introductory site: http://scrapy-chs.readthedocs.io/zh_CN/1.0/intro/overview.html

2. Installing Scrapy

The installation took some effort: two errors came up along the way, and I ended up installing from a mirror. Searching around, I learned that Scrapy depends on several libraries: lxml, pyOpenSSL, Twisted, and (on Windows) pywin32. I already had lxml installed, and pyOpenSSL installed fine with a plain pip install. Twisted, however, kept failing, and I finally installed it from a downloaded wheel. The steps:

1. Open the wheel site: https://www.lfd.uci.edu/~gohlke/pythonlibs/
2. Press Ctrl+F to open the search bar, type Twisted, and click through.
3. Find the matching build. My Python is 3.8 on a 64-bit machine, so the right file is Twisted-20.3.0-cp38-cp38-win_amd64.whl.
4. Download it, copy its path (right-click the file, Properties; the path shown is C:\Users\MI\Downloads\Twisted-20.3.0-cp38-cp38-win_amd64.whl), and install it:

pip install C:\Users\MI\Downloads\Twisted-20.3.0-cp38-cp38-win_amd64.whl

This finished instantly. pywin32 was installed from the USTC mirror (https://pypi.mirrors.ustc.edu.cn/simple/), and finally Scrapy itself from the same mirror:

pip install scrapy -i https://pypi.mirrors.ustc.edu.cn/simple/

It installed instantly. You can export all installed packages with pip freeze > E:\pip.txt to check them, and later reinstall them all elsewhere with pip install -r E:\pip.txt.

3. Scrapy Workflow

If you understand this workflow, you have mastered 60% of Scrapy. You should not only understand it, but also be able to explain it clearly to an interviewer.

1. Engine: the core of the whole framework.
2. Scheduler: accepts requests (URLs) from the engine and queues them.
3. Downloader: downloads the page source and returns it, via the engine, to the spider program.
4. Item pipeline: processes the scraped data.
5. Downloader middleware: handles requests and data transfer between the engine and the downloader.
6. Spider middleware: handles data transfer between the engine and the spider program — the spider's responses, its output results, and any new requests.

A diagram is attached later.
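The division of labour above can be sketched as a toy event loop. This is illustrative Python only, not Scrapy's real internals; every name in it (simulate_engine, fake_pages, and the callbacks) is made up for the sketch:

```python
from collections import deque

# Toy sketch of the Scrapy data flow (NOT real Scrapy internals):
# the engine pulls requests from the scheduler, hands them to the
# downloader, passes responses to the spider, and routes yielded
# items into the item pipeline.
def simulate_engine(start_urls, download, parse, pipeline):
    scheduler = deque(start_urls)   # scheduler: queue of pending requests
    items = []
    while scheduler:
        url = scheduler.popleft()   # engine asks the scheduler for a request
        response = download(url)    # downloader fetches the page source
        for result in parse(response):      # spider parses the response
            if isinstance(result, str):     # a new URL goes back to the scheduler
                scheduler.append(result)
            else:                           # an item goes to the pipeline
                items.append(pipeline(result))
    return items

# Toy callbacks standing in for the downloader and the spider.
fake_pages = {"page1": ["page2", {"title": "a"}], "page2": [{"title": "b"}]}
results = simulate_engine(
    ["page1"],
    download=lambda url: fake_pages[url],
    parse=lambda resp: resp,
    pipeline=lambda item: {**item, "processed": True},
)
print(results)
# [{'title': 'a', 'processed': True}, {'title': 'b', 'processed': True}]
```

The point of the sketch is that the engine only moves data between components; the scheduler, downloader, spider, and pipeline each do one job, which is why the framework is so configurable.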
4. Scrapy Quick Start

Step 1: create a Scrapy project. The command is:

scrapy startproject mySpider

mySpider is the project name and can be changed; the rest of the command is fixed. I created a new folder in PyCharm to hold the project, named Scrapy_01, copied its path, and in the PyCharm terminal ran cd D:\work\爬虫\Day18\my_code\scrapy to enter the folder. Opening the project folder Scrapy_01, we can see that a new folder has been generated, already containing several program files.

Step 2: create a spider program. The prompt suggests:

cd mySpider
scrapy genspider example example.com

Here example is the spider's name and example.com is the domain (the crawl scope) of the target site. Following the prompt, I created a Douban spider, where db is the program name and douban.com is the crawl scope, or domain:

D:\work\爬虫\Day18\my_code\Scrapy_01>cd mySpider
D:\work\爬虫\Day18\my_code\Scrapy_01\mySpider>scrapy genspider db douban.com
Created spider 'db' using template 'basic' in module:
  mySpider.spiders.db

The line Created spider 'db' using template 'basic' ... means the spider program was created successfully. In the original project folder we can now see the newly created spider file db.py.
Let's open the items file:

```python
# Define here the models for your scraped items
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/items.html

import scrapy


class MyspiderItem(scrapy.Item):
    # define the fields for your item here like:
    # name = scrapy.Field()
    pass
```

Next, the middlewares file:

```python
# Define here the models for your spider middleware
#
# See documentation in:
# https://docs.scrapy.org/en/latest/topics/spider-middleware.html

from scrapy import signals

# useful for handling different item types with a single interface
from itemadapter import is_item, ItemAdapter


class MyspiderSpiderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the spider middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_spider_input(self, response, spider):
        # Called for each response that goes through the spider
        # middleware and into the spider.
        # Should return None or raise an exception.
        return None

    def process_spider_output(self, response, result, spider):
        # Called with the results returned from the Spider, after
        # it has processed the response.
        # Must return an iterable of Request, or item objects.
        for i in result:
            yield i

    def process_spider_exception(self, response, exception, spider):
        # Called when a spider or process_spider_input() method
        # (from other spider middleware) raises an exception.
        # Should return either None or an iterable of Request or item objects.
        pass

    def process_start_requests(self, start_requests, spider):
        # Called with the start requests of the spider, and works
        # similarly to the process_spider_output() method, except
        # that it doesn't have a response associated.
        # Must return only requests (not items).
        for r in start_requests:
            yield r

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)


class MyspiderDownloaderMiddleware:
    # Not all methods need to be defined. If a method is not defined,
    # scrapy acts as if the downloader middleware does not modify the
    # passed objects.

    @classmethod
    def from_crawler(cls, crawler):
        # This method is used by Scrapy to create your spiders.
        s = cls()
        crawler.signals.connect(s.spider_opened, signal=signals.spider_opened)
        return s

    def process_request(self, request, spider):
        # Called for each request that goes through the downloader
        # middleware.
        # Must either:
        # - return None: continue processing this request
        # - or return a Response object
        # - or return a Request object
        # - or raise IgnoreRequest: process_exception() methods of
        #   installed downloader middleware will be called
        return None

    def process_response(self, request, response, spider):
        # Called with the response returned from the downloader.
        # Must either:
        # - return a Response object
        # - return a Request object
        # - or raise IgnoreRequest
        return response

    def process_exception(self, request, exception, spider):
        # Called when a download handler or a process_request()
        # (from other downloader middleware) raises an exception.
        # Must either:
        # - return None: continue processing this exception
        # - return a Response object: stops process_exception() chain
        # - return a Request object: stops process_exception() chain
        pass

    def spider_opened(self, spider):
        spider.logger.info('Spider opened: %s' % spider.name)
```