There are already plenty of articles about Scrapy online, so here I will focus on the problems I ran into during development and a few useful techniques:

1. Crawling with a logged-in session (carrying cookies)

 -What to install:

    brew install phantomjs (on macOS)

    pip install selenium

 -Code:

import time
import json

from selenium import webdriver
from selenium.webdriver.common.desired_capabilities import DesiredCapabilities

dcap = dict(DesiredCapabilities.PHANTOMJS)  # PhantomJS headers can be modified too
dcap["phantomjs.page.settings.userAgent"] = (
    "Mozilla/5.0 (Linux; U; Android 2.3.6; en-us; Nexus S Build/GRK39F) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1"
)

# Log in with an account/password and return the session cookie
def get_cookie_from_aicoin_login(account, password):
    browser = webdriver.PhantomJS(executable_path='/usr/local/bin/phantomjs', desired_capabilities=dcap)
    browser.get("https://www.aicoin.net.cn/sign_in")
    while 'Sign in to AIcoin' in browser.title:
        username = browser.find_element_by_name("user_account")  # the username field
        username.clear()
        username.send_keys(account)  # type the username
        psd = browser.find_element_by_name("user_password")  # the password field
        psd.clear()
        psd.send_keys(password)  # type the password
        code = browser.find_element_by_name("user_verify")  # the captcha field
        code.clear()
        code_verify = browser.find_element_by_xpath("//button[@class='verify_code']")  # some pages show a broken captcha; click to refresh it
        code_verify.click()
        time.sleep(1)
        browser.save_screenshot("aa.png")  # screenshot the login page and save it locally
        code_txt = input("Check the newly generated aa.png, then type the captcha: ")  # look at the image and enter the captcha manually
        code.send_keys(code_txt)  # type the captcha
        commit = browser.find_element_by_xpath("//div[@class='sure_btn']/button[@type='submit']")  # the login button
        commit.click()  # click submit
        time.sleep(3)
    cookie = {}
    for elem in browser.get_cookies():
        cookie[elem["name"]] = elem["value"]
    # return the cookie
    if 'AICoin - Leader Of Global Cryptocurrency Tickers Application' in browser.title:  # a successful login redirects to the home page
        return json.dumps(cookie)
    else:
        return {}
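A minimal usage sketch (my addition, not from the original post): the function returns a JSON string on success, while Scrapy's Request expects a dict for its cookies argument, so decode it on the way back in.

from scrapy.http import Request

def start_requests(self):  # inside your spider class
    cookie = json.loads(get_cookie_from_aicoin_login('your_account', 'your_password'))
    yield Request('https://www.aicoin.net.cn/', cookies=cookie, callback=self.parse)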

Special note: PhantomJS also comes in handy when you need to crawl dynamic content (content rendered by JS); a middleware sketch follows below.
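A minimal sketch of that pattern (my own addition; the class name is hypothetical): a downloader middleware that lets PhantomJS execute the page's JavaScript and hands Scrapy the rendered HTML.

from scrapy.http import HtmlResponse
from selenium import webdriver

class PhantomJSRenderMiddleware(object):
    def __init__(self):
        self.browser = webdriver.PhantomJS(executable_path='/usr/local/bin/phantomjs')

    def process_request(self, request, spider):
        self.browser.get(request.url)    # PhantomJS runs the page's JS
        body = self.browser.page_source  # the fully rendered HTML
        # returning a Response here short-circuits Scrapy's own download
        return HtmlResponse(request.url, body=body, encoding='utf-8', request=request)

Register it in DOWNLOADER_MIDDLEWARES the same way as the middleware in section 4.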

To run a spider (scrapy crawl yourspider) you need to cd into the project's top-level directory, i.e. the one containing scrapy.cfg. When debugging, you can also test extraction code directly with scrapy shell yoururl.
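For example (the directory name assumes the guba project used in the next section):

    cd gubaspider      # the directory containing scrapy.cfg
    scrapy crawl guba  # run the spider by its name attribute
    scrapy shell http://guba.eastmoney.com/default_551215.html
    >>> response.xpath('//ul[@class="newlist"]/li/span/a[2]/text()').extract()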

2. Crawling content recursively

-Add code like the following to the corresponding spider file (the code below crawls Guba posts and comments):

from functools import reduce  # reduce is no longer a builtin in Python 3

import scrapy
from scrapy.http import Request
from gubaspider.items import PostItem, CommentItem

class GubaSpider(scrapy.spiders.Spider):
    name = "guba"
    allowed_domains = ["eastmoney.com"]
    start_urls = [
        "http://guba.eastmoney.com/default_551215.html"
    ]

    def parse(self, response):
        tmp_list = []
        for i in response.xpath('//ul[@class="newlist"]/li'):
            title = i.xpath('span/a[2]/text()').extract()[0]
            ar_url = i.xpath('span/a[2]/@href').extract()[0]
            group = i.xpath('span/a[1]/text()').extract()[0]
            comment_sum = i.xpath('cite[2]/text()').extract()[0]
            read_sum = i.xpath('cite[1]/text()').extract()[0]
            author = i.xpath('cite[3]/a/text()').extract()[0]
            tmp_list.append({'title': title, 'ar_url': ar_url, 'group': group,
                             'comment_sum': comment_sum, 'read_sum': read_sum,
                             'author': author})
        for z in tmp_list:
            # Follow each URL scraped from the first page; meta carries the fields
            # along and cookies carries the login cookie from section 1 (user and
            # pwd are your credentials, defined elsewhere). callback is the method
            # that parses the new URL.
            yield Request('http://guba.eastmoney.com' + z.pop('ar_url'),
                          callback=self.parse_article, meta=z,
                          cookies=get_cookie_from_aicoin_login(user, pwd))

    def parse_article(self, response):
        title = response.meta['title']
        group = response.meta['group']
        comment_sum = response.meta['comment_sum']
        read_sum = response.meta['read_sum']
        author = response.meta['author']
        content = response.xpath('//div[@id="zwcontent"]/div[@class="zwcontentmain"]/div[@id="zwconbody"]/div[@class="stockcodec"]').extract()
        # get_node_value is a small helper (not shown here) that returns the first
        # extracted node, or 0 when the list is empty
        post_time = self.get_node_value(response.xpath('//div[@id="zwcontent"]/div[@id="zwcontt"]/div[@id="zwconttb"]/div[@class="zwfbtime"]/text()').extract())
        if post_time != 0:
            post_type = post_time.split(' ')[-1]
            post_time = post_time[4:24]
        good_sum = self.get_node_value(response.xpath('//div[@id="zwcontent"]/div[@class="zwconbtns clearfix"]/div[@id="zwconbtnsi_z"]/span[@id="zwpraise"]/a/span/text()').extract())
        transmit_sum = self.get_node_value(response.xpath(
            '//div[@id="zwcontent"]/div[@class="zwconbtns clearfix"]/div[@id="zwconbtnsi_zf"]/a/span/text()').extract())
        comments = response.xpath('//div[@id="zwlist"]/div[@class="zwli clearfix"]/div[@class="zwlitx"]/div[@class="zwlitxt"]/div[@class="zwlitext stockcodec"]/text()').extract()
        cm_name = response.xpath('//div[@id="zwlist"]/div[@class="zwli clearfix"]/div[@class="zwlitx"]/div[@class="zwlitxt"]/div[@class="zwlianame"]/span[@class="zwnick"]/a/text()').extract()
        time = response.xpath('//div[@id="zwlist"]/div[@class="zwli clearfix"]/div[@class="zwlitx"]/div[@class="zwlitxt"]/div[@class="zwlitime"]/text()').extract()
        page_info = response.xpath(
            '//div[@id="zwlist"]/div[@class="pager talc zwpager"]/span[@id="newspage"]/@data-page').extract()
        item = PostItem()
        item['Author'] = author            # post author
        item['Title'] = title              # post title
        item['Content'] = content          # post body
        item['PubTime'] = post_time        # publish time
        item['PostWay'] = post_time if post_time == 0 else post_type  # how it was posted, e.g. via web
        item['Url'] = response.url         # post URL
        item['Group'] = group              # forum the post belongs to
        item['Like'] = good_sum            # number of likes
        item['Transmit'] = transmit_sum    # number of shares
        item['Comment_Num'] = comment_sum  # number of comments
        item['Tour'] = read_sum            # number of views
        cm_list = []
        for x in range(len(cm_name)):
            if comments[x] == ' ':
                # an empty text node means the comment is an image; collect the img titles instead
                s = '//div[@id="zwlist"]/div[' + str(
                    x + 1) + ']/div[@class="zwlitx"]/div[@class="zwlitxt"]/div[@class="zwlitext stockcodec"]/img/@title'
                s = response.xpath(s).extract()
                comment = reduce(lambda a, b: a + '|' + b, s) if len(s) > 0 else ''
            else:
                comment = comments[x]
            cm_list.append({'name': cm_name[x], 'time': time[x][4:], 'comment': comment})
        item['Comments'] = cm_list  # replies
        yield item  # hand the item to the pipeline (stored in the DB)
        if len(page_info) > 0:
            page_info = page_info[0].split('|')
            sumpage = int(int(page_info[1]) / int(page_info[2])) + 1
            for p in range(1, sumpage):
                cm_url = 'http://guba.eastmoney.com/' + page_info[0] + str(p + 1) + '.html'
                yield Request(cm_url, callback=self.parse_comment)  # then crawl the next comment page
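The spider above references a parse_comment callback that the original notes omit. A minimal sketch consistent with the CommentItem defined in the next section (the XPaths are reused from parse_article and are my assumption):

    def parse_comment(self, response):  # inside GubaSpider
        names = response.xpath('//div[@id="zwlist"]/div[@class="zwli clearfix"]/div[@class="zwlitx"]/div[@class="zwlitxt"]/div[@class="zwlianame"]/span[@class="zwnick"]/a/text()').extract()
        texts = response.xpath('//div[@id="zwlist"]/div[@class="zwli clearfix"]/div[@class="zwlitx"]/div[@class="zwlitxt"]/div[@class="zwlitext stockcodec"]/text()').extract()
        item = CommentItem()
        item['Url'] = response.url  # the comment page's URL
        item['Comments'] = [{'name': n, 'comment': t} for n, t in zip(names, texts)]
        yield item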

3. Storing the data in MongoDB

-Add a custom pipeline class to the pipelines file:

 import pymongo

class MongoPipeline(object):

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        collection_name = item.__class__.__name__  # one collection per item class, e.g. "PostItem"
        self.db[collection_name].insert(dict(item))  # insert_one() in PyMongo 3+
        return item

-Define your items in items.py:

 from scrapy import Item,Field

 class PostItem(Item):
    Author = Field()       # post author
    Title = Field()        # post title
    Content = Field()      # post body
    PubTime = Field()      # publish time
    # Top = Field()        # whether the post is pinned
    PostWay = Field()      # how it was posted, e.g. via web
    Url = Field()          # post URL
    Group = Field()        # forum the post belongs to
    Like = Field()         # number of likes
    Transmit = Field()     # number of shares
    Comment_Num = Field()  # number of comments
    Tour = Field()         # number of views
    Comments = Field()     # replies

 class CommentItem(Item):
    Url = Field()       # url
    Comments = Field()  # comments

-Add ITEM_PIPELINES in settings:

 ITEM_PIPELINES = {
    'gubaspider.pipelines.MongoPipeline': 300,
}
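The MongoPipeline above reads its connection info from two settings, so add them as well (the URI and database name below are placeholders):

MONGO_URI = 'mongodb://localhost:27017'
MONGO_DATABASE = 'guba'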

4. Adding a proxy and User-Agent

-Add your middleware class in middlewares:

import random

from user_agents import agents  # import the full agent list from a file

class UserAgentMiddleware(object):

    def process_request(self, request, spider):
        agent = random.choice(agents)
        request.headers["User-Agent"] = agent  # a random agent
        request.meta['proxy'] = "http://proxy.yourproxy:8001"  # set the proxy address

-Configure the middleware in settings:

 DOWNLOADER_MIDDLEWARES = {
'gubaspider.middlewares.UserAgentMiddleware' : 543
}
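If you have more than one proxy, the same hook extends naturally to rotating them per request; a sketch (the proxy pool below is hypothetical):

import random

PROXIES = [  # replace with your own proxies
    "http://proxy1.yourproxy:8001",
    "http://proxy2.yourproxy:8001",
]

class RandomProxyMiddleware(object):
    def process_request(self, request, spider):
        request.meta['proxy'] = random.choice(PROXIES)  # a different proxy each request

Register it in DOWNLOADER_MIDDLEWARES next to UserAgentMiddleware.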

-The user_agents file contains a list of agents:


 """ User-Agents """
agents = [
"Mozilla/5.0 (Linux; U; Android 2.3.6; en-us; Nexus S Build/GRK39F) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"Avant Browser/1.2.789rel1 (http://www.avantbrowser.com)",
"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/532.5 (KHTML, like Gecko) Chrome/4.0.249.0 Safari/532.5",
"Mozilla/5.0 (Windows; U; Windows NT 5.2; en-US) AppleWebKit/532.9 (KHTML, like Gecko) Chrome/5.0.310.0 Safari/532.9",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/534.7 (KHTML, like Gecko) Chrome/7.0.514.0 Safari/534.7",
"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US) AppleWebKit/534.14 (KHTML, like Gecko) Chrome/9.0.601.0 Safari/534.14",
"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.14 (KHTML, like Gecko) Chrome/10.0.601.0 Safari/534.14",
"Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US) AppleWebKit/534.20 (KHTML, like Gecko) Chrome/11.0.672.2 Safari/534.20",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/534.27 (KHTML, like Gecko) Chrome/12.0.712.0 Safari/534.27",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.1 (KHTML, like Gecko) Chrome/13.0.782.24 Safari/535.1",
"Mozilla/5.0 (Windows NT 6.0) AppleWebKit/535.2 (KHTML, like Gecko) Chrome/15.0.874.120 Safari/535.2",
"Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/535.7 (KHTML, like Gecko) Chrome/16.0.912.36 Safari/535.7",
"Mozilla/5.0 (Windows; U; Windows NT 6.0 x64; en-US; rv:1.9pre) Gecko/2008072421 Minefield/3.0.2pre",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US; rv:1.9.0.10) Gecko/2009042316 Firefox/3.0.10",
"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-GB; rv:1.9.0.11) Gecko/2009060215 Firefox/3.0.11 (.NET CLR 3.5.30729)",
"Mozilla/5.0 (Windows; U; Windows NT 6.0; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6 GTB5",
"Mozilla/5.0 (Windows; U; Windows NT 5.1; tr; rv:1.9.2.8) Gecko/20100722 Firefox/3.6.8 ( .NET CLR 3.5.30729; .NET4.0E)",
"Mozilla/5.0 (Windows NT 6.1; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
"Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:2.0.1) Gecko/20100101 Firefox/4.0.1",
"Mozilla/5.0 (Windows NT 5.1; rv:5.0) Gecko/20100101 Firefox/5.0",
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0a2) Gecko/20110622 Firefox/6.0a2",
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:7.0.1) Gecko/20100101 Firefox/7.0.1",
"Mozilla/5.0 (Windows NT 6.1; WOW64; rv:2.0b4pre) Gecko/20100815 Minefield/4.0b4pre",
"Mozilla/4.0 (compatible; MSIE 5.5; Windows NT 5.0 )",
"Mozilla/4.0 (compatible; MSIE 5.5; Windows 98; Win 9x 4.90)",
"Mozilla/5.0 (Windows; U; Windows XP) Gecko MultiZilla/1.6.1.0a",
"Mozilla/2.02E (Win95; U)",
"Mozilla/3.01Gold (Win95; I)",
"Mozilla/4.8 [en] (Windows NT 5.1; U)",
"Mozilla/5.0 (Windows; U; Win98; en-US; rv:1.4) Gecko Netscape/7.1 (ax)",
"HTC_Dream Mozilla/5.0 (Linux; U; Android 1.5; en-ca; Build/CUPCAKE) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
"Mozilla/5.0 (hp-tablet; Linux; hpwOS/3.0.2; U; de-DE) AppleWebKit/534.6 (KHTML, like Gecko) wOSBrowser/234.40.1 Safari/534.6 TouchPad/1.0",
"Mozilla/5.0 (Linux; U; Android 1.5; en-us; sdk Build/CUPCAKE) AppleWebkit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
"Mozilla/5.0 (Linux; U; Android 2.1; en-us; Nexus One Build/ERD62) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
"Mozilla/5.0 (Linux; U; Android 2.2; en-us; Nexus One Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"Mozilla/5.0 (Linux; U; Android 1.5; en-us; htc_bahamas Build/CRB17) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
"Mozilla/5.0 (Linux; U; Android 2.1-update1; de-de; HTC Desire 1.19.161.5 Build/ERE27) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
"Mozilla/5.0 (Linux; U; Android 2.2; en-us; Sprint APA9292KT Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"Mozilla/5.0 (Linux; U; Android 1.5; de-ch; HTC Hero Build/CUPCAKE) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
"Mozilla/5.0 (Linux; U; Android 2.2; en-us; ADR6300 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"Mozilla/5.0 (Linux; U; Android 2.1; en-us; HTC Legend Build/cupcake) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
"Mozilla/5.0 (Linux; U; Android 1.5; de-de; HTC Magic Build/PLAT-RC33) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1 FirePHP/0.3",
"Mozilla/5.0 (Linux; U; Android 1.6; en-us; HTC_TATTOO_A3288 Build/DRC79) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
"Mozilla/5.0 (Linux; U; Android 1.0; en-us; dream) AppleWebKit/525.10 (KHTML, like Gecko) Version/3.0.4 Mobile Safari/523.12.2",
"Mozilla/5.0 (Linux; U; Android 1.5; en-us; T-Mobile G1 Build/CRB43) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari 525.20.1",
"Mozilla/5.0 (Linux; U; Android 1.5; en-gb; T-Mobile_G2_Touch Build/CUPCAKE) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
"Mozilla/5.0 (Linux; U; Android 2.0; en-us; Droid Build/ESD20) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
"Mozilla/5.0 (Linux; U; Android 2.2; en-us; Droid Build/FRG22D) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"Mozilla/5.0 (Linux; U; Android 2.0; en-us; Milestone Build/ SHOLS_U2_01.03.1) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
"Mozilla/5.0 (Linux; U; Android 2.0.1; de-de; Milestone Build/SHOLS_U2_01.14.0) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
"Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/525.10 (KHTML, like Gecko) Version/3.0.4 Mobile Safari/523.12.2",
"Mozilla/5.0 (Linux; U; Android 0.5; en-us) AppleWebKit/522 (KHTML, like Gecko) Safari/419.3",
"Mozilla/5.0 (Linux; U; Android 1.1; en-gb; dream) AppleWebKit/525.10 (KHTML, like Gecko) Version/3.0.4 Mobile Safari/523.12.2",
"Mozilla/5.0 (Linux; U; Android 2.0; en-us; Droid Build/ESD20) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
"Mozilla/5.0 (Linux; U; Android 2.1; en-us; Nexus One Build/ERD62) AppleWebKit/530.17 (KHTML, like Gecko) Version/4.0 Mobile Safari/530.17",
"Mozilla/5.0 (Linux; U; Android 2.2; en-us; Sprint APA9292KT Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"Mozilla/5.0 (Linux; U; Android 2.2; en-us; ADR6300 Build/FRF91) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"Mozilla/5.0 (Linux; U; Android 2.2; en-ca; GT-P1000M Build/FROYO) AppleWebKit/533.1 (KHTML, like Gecko) Version/4.0 Mobile Safari/533.1",
"Mozilla/5.0 (Linux; U; Android 3.0.1; fr-fr; A500 Build/HRI66) AppleWebKit/534.13 (KHTML, like Gecko) Version/4.0 Safari/534.13",
"Mozilla/5.0 (Linux; U; Android 3.0; en-us; Xoom Build/HRI39) AppleWebKit/525.10 (KHTML, like Gecko) Version/3.0.4 Mobile Safari/523.12.2",
"Mozilla/5.0 (Linux; U; Android 1.6; es-es; SonyEricssonX10i Build/R1FA016) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
"Mozilla/5.0 (Linux; U; Android 1.6; en-us; SonyEricssonX10i Build/R1AA056) AppleWebKit/528.5 (KHTML, like Gecko) Version/3.1.2 Mobile Safari/525.20.1",
]

※ Some of the code above is adapted from https://github.com/LiuXingMing/SinaSpider

