Limiting Parallelism
jcalderone
May 22nd, 2006
Concurrency can be a great way to speed things up, but what happens when you have too much concurrency? Overloading a system or a network can be detrimental to performance; often there is a peak in performance at a particular level of concurrency. Executing a fixed number of tasks in parallel is easier than ever with Twisted 2.5 and Python 2.5:

from twisted.internet import defer, task

def parallel(iterable, count, callable, *args, **named):
    # One shared generator of Deferred-returning calls; each of the
    # ``count`` coiterate() consumers pulls from it in turn.
    coop = task.Cooperator()
    work = (callable(elem, *args, **named) for elem in iterable)
    return defer.DeferredList([coop.coiterate(work) for i in xrange(count)])
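Each of the count calls to coiterate() consumes the same generator: when a consumer pulls an element and gets a Deferred back, the cooperator waits for that Deferred to fire before the consumer pulls again. Because count consumers share one iterator, at most count tasks are in flight at any moment, and the wrapping DeferredList fires once every consumer has run the iterator dry.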

Here's an example of using this to save the contents of a bunch of URLs which are listed one per line in a text file, downloading at most fifty at a time:

from twisted.python import log
from twisted.internet import reactor
from twisted.web import client

def download((url, fileName)):
    # Returns a Deferred that fires once the page has been written to the file.
    return client.downloadPage(url, file(fileName, 'wb'))

# Strip the trailing newline that iterating over the file leaves on each line.
urls = [(url.strip(), str(n)) for (n, url) in enumerate(file('urls.txt'))]
finished = parallel(urls, 50, download)
finished.addErrback(log.err)
finished.addCallback(lambda ign: reactor.stop())
reactor.run()

[Edit: The original generator expression in this post was of the form ((yield foo()) for x in y). The yield here is completely superfluous, of course, so I have removed it.]
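For reference, here are the two forms side by side, in the context of parallel() above; the inner yield only added a superfluous extra yield per element, while the simplified form yields each Deferred directly, which is all coiterate() needs:

work = ((yield callable(elem, *args, **named)) for elem in iterable)  # original
work = (callable(elem, *args, **named) for elem in iterable)          # simplified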

Here is a quick experiment: consume part of a generator by hand, then hand the rest of it to a Cooperator.

from twisted.internet import defer, reactor, task

l = [3, 4, 5, 6]

def f(a):
    print a

work = (f(elem) for elem in l)

# Consume the first three elements by hand; each call runs f() once.
for i in range(3):
    work.next()

# Hand the remainder of the generator to a Cooperator.
coop = task.Cooperator()
d = [coop.coiterate(work) for _ in range(5)]
print d

[<Deferred at 0x1aa0c88 waiting on Deferred at 0x1aa0d50>, <Deferred at 0x1aa0dc8 waiting on Deferred at 0x1aa0e90>, <Deferred at 0x1aa0f30 waiting on Deferred at 0x1aa4030>, <Deferred at 0x1aa40d0 waiting on Deferred at 0x1aa4198>, <Deferred at 0x1aa4238 waiting on Deferred at 0x1aa4300>]
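All five Deferreds print as pending because the Deferred returned by coiterate() only fires once the shared generator is exhausted, and the snippet never starts the reactor. A minimal sketch of what would let them fire (not part of the original experiment):

# Run the reactor so the cooperator can drain the generator, and stop it
# once every coiterate() Deferred has fired.
finished = defer.DeferredList(d)
finished.addCallback(lambda ign: reactor.stop())
reactor.run()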
