1. Introduction

Scrapy is a Python-based web crawler framework that pulls information down from the web, which makes it a good way to obtain data, so I wanted to install it and take a look.

I went to its official site, to the installation page:

https://docs.scrapy.org/en/latest/intro/install.html

2. Failed installation attempts

There are three ways to install it: via pip, by compiling from source, or via conda.

From what I already knew, pip is the package manager bundled with Python and the simplest, most straightforward option, so I ignored one sentence on the official installation page:

Note that sometimes this may require solving compilation issues for some Scrapy dependencies depending on your operating system

As a result, the build stage reported many errors; solving one just revealed another.

So I gave up on pip and tried building from source, which produced the same pile of compilation errors.

3. Installing via conda

With no other option left, I looked at conda and downloaded Miniconda, about 60 MB. After studying it a bit, it turned out to be great.

The Python community has probably noticed this problem: packages downloaded as source need to be compiled, compilation depends on your own OS environment, and it frequently goes wrong.

Miniconda is an integrated environment with Python already installed: downloading and installing Miniconda gives you the core Python program, and you can then use the conda command to fetch packages that conda has already compiled. In other words, the Scrapy package and the dependencies that nearly killed me during compilation, such as lxml and Twisted, are all prebuilt, so you can just download them and use them directly.

conda install -c conda-forge scrapy

https://conda.io/docs/install/quick.html

https://conda.io/miniconda.html

English Version

1. Introduction

Scrapy is a web crawler framework based on Python that can download information from the Internet, which makes it a good way to obtain raw data.
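To make the idea of a crawler concrete, here is a tiny standard-library-only sketch of the extraction step a crawler performs: parsing HTML and collecting the links to follow next. This is not Scrapy code, just an illustration; the sample HTML string is made up.

```python
# A minimal, stdlib-only sketch of a crawler's extraction step.
# This is NOT Scrapy code -- just an illustration of pulling links out of HTML.
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href attribute of every <a> tag encountered."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

html = '<p><a href="https://docs.scrapy.org/">docs</a> <a href="/intro/install.html">install</a></p>'
parser = LinkExtractor()
parser.feed(html)
print(parser.links)  # the hrefs found in the sample HTML
```

A real crawler would then download each collected link and repeat the process; Scrapy wraps this loop (plus scheduling, throttling, and parsing helpers) into a framework.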

To understand Scrapy better, I followed the installation instructions on the official website below and tried to install the framework:

https://docs.scrapy.org/en/latest/intro/install.html

2. Failed installation attempts

Before installing Scrapy, you need a Python environment on your computer; from Python's point of view, Scrapy is just another extension package.

With Python in place, there are three ways to install the Scrapy package: via pip, by compiling the dependencies from source, or via conda.
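The three approaches look roughly like this. The pip and conda commands are the ones given in the Scrapy docs; the source-install steps are a sketch assuming the upstream GitHub repository, and on many systems the first two routes will trigger local compilation of C extensions such as lxml and Twisted.

```shell
# 1. pip: may need to compile C-extension dependencies locally
pip install Scrapy

# 2. from source: clone and install -- the same local compilation applies
git clone https://github.com/scrapy/scrapy.git
cd scrapy
pip install .

# 3. conda: installs prebuilt binaries from the conda-forge channel
conda install -c conda-forge scrapy
```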

Based on my previous experience, pip is the package management tool already integrated into the Python environment, and it is usually the most straightforward way to install things. However, I had overlooked one important note on the official website:

Note that sometimes this may require solving compilation issues for some Scrapy dependencies depending on your operating system

As a result, there were many compilation errors while the dependencies were being installed; solving one only revealed another. I then tried the second method, building from source, and got the same result.

3. Installing via conda

The last option was to install Scrapy via conda. I found the official conda website and downloaded Miniconda as instructed, around 60 MB. After installing and running the tool, everything became simple. The conda developers must have noticed that compiling dependencies drives people crazy, since the build depends on each user's OS environment.

Miniconda is an integrated Python environment that ships with the core Python interpreter. Users who want a package simply download it instead of compiling it locally; the extension packages, including Scrapy dependencies such as lxml and Twisted, are prebuilt on conda's servers, which avoids the compilation issues described above.

The installation command is:

conda install -c conda-forge scrapy
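A full session might look like the following. The environment name `scrapy-env` is my own choice; creating a dedicated environment is optional, but it keeps Scrapy's dependencies isolated from the base install.

```shell
# Create and activate a fresh environment so Scrapy's dependencies stay isolated
conda create -n scrapy-env python
conda activate scrapy-env

# Install the prebuilt Scrapy package (plus lxml, Twisted, etc.) from conda-forge
conda install -c conda-forge scrapy

# Verify: both commands should succeed without any local compilation
python -c "import scrapy; print(scrapy.__version__)"
scrapy version
```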

https://conda.io/docs/install/quick.html

https://conda.io/miniconda.html
