[python]Mongodb
Documentation:
http://api.mongodb.com/python/current/tutorial.html
Installation:
Download the installer directly from the official site; the Homebrew download on macOS is too slow, so I plan to install it manually.
Usage:
Start the server:
mongod # start with the default configuration
mongod --dbpath <db path> # specify the database file path
Connect to the server:
mongo # connect with the default configuration
mongo [options] [db address] [file names (ending in .js)]
GUI client:
https://www.robomongo.org/
shell:
> help
db.help() help on db methods
db.mycoll.help() help on collection methods
sh.help() sharding helpers
rs.help() replica set helpers
help admin administrative help
help connect connecting to a db help
help keys key shortcuts
help misc misc things to know
help mr mapreduce
show dbs show database names
show collections show collections in current database
show users show users in current database
show profile show most recent system.profile entries with time >= 1ms
show logs show the accessible logger names
show log [name] prints out the last segment of log in memory, 'global' is default
use <db_name> set current database
db.foo.find() list objects in collection foo
db.foo.find( { a : 1 } ) list objects in foo where a == 1
it result of the last line evaluated; use to further iterate
DBQuery.shellBatchSize = x set default number of items to display on shell
exit quit the mongo shell
more help topics...
> db.help()
DB methods:
db.adminCommand(nameOrDocument) - switches to 'admin' db, and runs command [just calls db.runCommand(...)]
db.aggregate([pipeline], {options}) - performs a collectionless aggregation on this database; returns a cursor
db.auth(username, password)
db.cloneDatabase(fromhost)
db.commandHelp(name) returns the help for the command
db.copyDatabase(fromdb, todb, fromhost)
db.createCollection(name, {size: ..., capped: ..., max: ...})
db.createView(name, viewOn, [{$operator: {...}}, ...], {viewOptions})
db.createUser(userDocument)
db.currentOp() displays currently executing operations in the db
db.dropDatabase()
db.eval() - deprecated
db.fsyncLock() flush data to disk and lock server for backups
db.fsyncUnlock() unlocks server following a db.fsyncLock()
db.getCollection(cname) same as db['cname'] or db.cname
db.getCollectionInfos([filter]) - returns a list that contains the names and options of the db's collections
db.getCollectionNames()
db.getLastError() - just returns the err msg string
db.getLastErrorObj() - return full status object
db.getLogComponents()
db.getMongo() get the server connection object
db.getMongo().setSlaveOk() allow queries on a replication slave server
db.getName()
db.getPrevError()
db.getProfilingLevel() - deprecated
db.getProfilingStatus() - returns if profiling is on and slow threshold
db.getReplicationInfo()
db.getSiblingDB(name) get the db at the same server as this one
db.getWriteConcern() - returns the write concern used for any operations on this db, inherited from server object if set
db.hostInfo() get details about the server's host
db.isMaster() check replica primary status
db.killOp(opid) kills the current operation in the db
db.listCommands() lists all the db commands
db.loadServerScripts() loads all the scripts in db.system.js
db.logout()
db.printCollectionStats()
db.printReplicationInfo()
db.printShardingStatus()
db.printSlaveReplicationInfo()
db.dropUser(username)
db.repairDatabase()
db.resetError()
db.runCommand(cmdObj) run a database command. if cmdObj is a string, turns it into {cmdObj: 1}
db.serverStatus()
db.setLogLevel(level,<component>)
db.setProfilingLevel(level,slowms) 0=off 1=slow 2=all
db.setWriteConcern(<write concern doc>) - sets the write concern for writes to the db
db.unsetWriteConcern(<write concern doc>) - unsets the write concern for writes to the db
db.setVerboseShell(flag) display extra information in shell output
db.shutdownServer()
db.stats()
db.version() current version of the server
>
DB methods
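Most of these db.* helpers have pymongo counterparts on the Database object. A minimal sketch, assuming a local mongod on the default port and pymongo 3.6+ (the database name mydb is just an example):
from pymongo import MongoClient

client = MongoClient("localhost", 27017)      # connect to the local mongod
db = client["mydb"]                           # like `use mydb` in the shell

print(db.list_collection_names())             # roughly `show collections`
print(db.command("dbStats"))                  # roughly db.stats()
print(db.command("serverStatus")["version"])  # roughly db.version()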
> db.mycoll.help()
DBCollection help
db.mycoll.find().help() - show DBCursor help
db.mycoll.bulkWrite( operations, <optional params> ) - bulk execute write operations, optional parameters are: w, wtimeout, j
db.mycoll.count( query = {}, <optional params> ) - count the number of documents that matches the query, optional parameters are: limit, skip, hint, maxTimeMS
db.mycoll.copyTo(newColl) - duplicates collection by copying all documents to newColl; no indexes are copied.
db.mycoll.convertToCapped(maxBytes) - calls {convertToCapped:'mycoll', size:maxBytes}} command
db.mycoll.createIndex(keypattern[,options])
db.mycoll.createIndexes([keypatterns], <options>)
db.mycoll.dataSize()
db.mycoll.deleteOne( filter, <optional params> ) - delete first matching document, optional parameters are: w, wtimeout, j
db.mycoll.deleteMany( filter, <optional params> ) - delete all matching documents, optional parameters are: w, wtimeout, j
db.mycoll.distinct( key, query, <optional params> ) - e.g. db.mycoll.distinct( 'x' ), optional parameters are: maxTimeMS
db.mycoll.drop() drop the collection
db.mycoll.dropIndex(index) - e.g. db.mycoll.dropIndex( "indexName" ) or db.mycoll.dropIndex( { "indexKey" : 1 } )
db.mycoll.dropIndexes()
db.mycoll.ensureIndex(keypattern[,options]) - DEPRECATED, use createIndex() instead
db.mycoll.explain().help() - show explain help
db.mycoll.reIndex()
db.mycoll.find([query],[fields]) - query is an optional query filter. fields is optional set of fields to return.
e.g. db.mycoll.find( {x:77} , {name:1, x:1} )
db.mycoll.find(...).count()
db.mycoll.find(...).limit(n)
db.mycoll.find(...).skip(n)
db.mycoll.find(...).sort(...)
db.mycoll.findOne([query], [fields], [options], [readConcern])
db.mycoll.findOneAndDelete( filter, <optional params> ) - delete first matching document, optional parameters are: projection, sort, maxTimeMS
db.mycoll.findOneAndReplace( filter, replacement, <optional params> ) - replace first matching document, optional parameters are: projection, sort, maxTimeMS, upsert, returnNewDocument
db.mycoll.findOneAndUpdate( filter, update, <optional params> ) - update first matching document, optional parameters are: projection, sort, maxTimeMS, upsert, returnNewDocument
db.mycoll.getDB() get DB object associated with collection
db.mycoll.getPlanCache() get query plan cache associated with collection
db.mycoll.getIndexes()
db.mycoll.group( { key : ..., initial: ..., reduce : ...[, cond: ...] } )
db.mycoll.insert(obj)
db.mycoll.insertOne( obj, <optional params> ) - insert a document, optional parameters are: w, wtimeout, j
db.mycoll.insertMany( [objects], <optional params> ) - insert multiple documents, optional parameters are: w, wtimeout, j
db.mycoll.mapReduce( mapFunction , reduceFunction , <optional params> )
db.mycoll.aggregate( [pipeline], <optional params> ) - performs an aggregation on a collection; returns a cursor
db.mycoll.remove(query)
db.mycoll.replaceOne( filter, replacement, <optional params> ) - replace the first matching document, optional parameters are: upsert, w, wtimeout, j
db.mycoll.renameCollection( newName , <dropTarget> ) renames the collection.
db.mycoll.runCommand( name , <options> ) runs a db command with the given name where the first param is the collection name
db.mycoll.save(obj)
db.mycoll.stats({scale: N, indexDetails: true/false, indexDetailsKey: <index key>, indexDetailsName: <index name>})
db.mycoll.storageSize() - includes free space allocated to this collection
db.mycoll.totalIndexSize() - size in bytes of all the indexes
db.mycoll.totalSize() - storage allocated for all data and indexes
db.mycoll.update( query, object[, upsert_bool, multi_bool] ) - instead of two flags, you can pass an object with fields: upsert, multi
db.mycoll.updateOne( filter, update, <optional params> ) - update the first matching document, optional parameters are: upsert, w, wtimeout, j
db.mycoll.updateMany( filter, update, <optional params> ) - update all matching documents, optional parameters are: upsert, w, wtimeout, j
db.mycoll.validate( <full> ) - SLOW
db.mycoll.getShardVersion() - only for use with sharding
db.mycoll.getShardDistribution() - prints statistics about data distribution in the cluster
db.mycoll.getSplitKeysForChunks( <maxChunkSize> ) - calculates split points over all chunks and returns splitter function
db.mycoll.getWriteConcern() - returns the write concern used for any operations on this collection, inherited from server/db if set
db.mycoll.setWriteConcern( <write concern doc> ) - sets the write concern for writes to the collection
db.mycoll.unsetWriteConcern( <write concern doc> ) - unsets the write concern for writes to the collection
db.mycoll.latencyStats() - display operation latency histograms for this collection
>
Collection methods
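The collection helpers map onto pymongo Collection methods almost one to one. A minimal sketch, assuming pymongo 3.7+ (collection and field names are made up for illustration):
from pymongo import ASCENDING, MongoClient

coll = MongoClient("localhost", 27017)["mydb"]["mycoll"]

coll.insert_one({"name": "alice", "x": 1})               # db.mycoll.insertOne(...)
coll.insert_many([{"x": 2}, {"x": 3}])                   # db.mycoll.insertMany(...)
coll.create_index([("x", ASCENDING)])                    # db.mycoll.createIndex({x: 1})

for doc in coll.find({"x": {"$gte": 2}}).limit(10):      # db.mycoll.find(...).limit(n)
    print(doc)

coll.update_one({"name": "alice"}, {"$set": {"x": 10}})  # db.mycoll.updateOne(...)
coll.delete_many({"x": {"$lt": 3}})                      # db.mycoll.deleteMany(...)
print(coll.count_documents({}))                          # db.mycoll.count(...)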
> sh.help()
sh.addShard( host ) server:port OR setname/server:port
sh.addShardToZone(shard,zone) adds the shard to the zone
sh.updateZoneKeyRange(fullName,min,max,zone) assigns the specified range of the given collection to a zone
sh.disableBalancing(coll) disable balancing on one collection
sh.enableBalancing(coll) re-enable balancing on one collection
sh.enableSharding(dbname) enables sharding on the database dbname
sh.getBalancerState() returns whether the balancer is enabled
sh.isBalancerRunning() return true if the balancer has work in progress on any mongos
sh.moveChunk(fullName,find,to) move the chunk where 'find' is to 'to' (name of shard)
sh.removeShardFromZone(shard,zone) removes the shard from zone
sh.removeRangeFromZone(fullName,min,max) removes the range of the given collection from any zone
sh.shardCollection(fullName,key,unique,options) shards the collection
sh.splitAt(fullName,middle) splits the chunk that middle is in at middle
sh.splitFind(fullName,find) splits the chunk that find is in at the median
sh.startBalancer() starts the balancer so chunks are balanced automatically
sh.status() prints a general overview of the cluster
sh.stopBalancer() stops the balancer so chunks are not balanced automatically
sh.disableAutoSplit() disable autoSplit on one collection
sh.enableAutoSplit() re-enable autoSplit on one collection
sh.getShouldAutoSplit() returns whether autosplit is enabled
>
sharding helpers
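pymongo does not expose the sh.* helpers directly; the underlying admin commands can be sent instead. A sketch, assuming the client points at a mongos of a sharded cluster (database and collection names are illustrative):
from pymongo import MongoClient

client = MongoClient("localhost", 27017)                 # must be a mongos, not a plain mongod
client.admin.command("enableSharding", "mydb")           # sh.enableSharding('mydb')
client.admin.command("shardCollection", "mydb.mycoll",
                     key={"_id": "hashed"})              # sh.shardCollection(...)
print(client.admin.command("balancerStatus"))            # roughly sh.getBalancerState()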
> rs.help()
rs.status() { replSetGetStatus : 1 } checks repl set status
rs.initiate() { replSetInitiate : null } initiates set with default settings
rs.initiate(cfg) { replSetInitiate : cfg } initiates set with configuration cfg
rs.conf() get the current configuration object from local.system.replset
rs.reconfig(cfg) updates the configuration of a running replica set with cfg (disconnects)
rs.add(hostportstr) add a new member to the set with default attributes (disconnects)
rs.add(membercfgobj) add a new member to the set with extra attributes (disconnects)
rs.addArb(hostportstr) add a new member which is arbiterOnly:true (disconnects)
rs.stepDown([stepdownSecs, catchUpSecs]) step down as primary (disconnects)
rs.syncFrom(hostportstr) make a secondary sync from the given member
rs.freeze(secs) make a node ineligible to become primary for the time specified
rs.remove(hostportstr) remove a host from the replica set (disconnects)
rs.slaveOk() allow queries on secondary nodes
rs.printReplicationInfo() check oplog size and time range
rs.printSlaveReplicationInfo() check replica set members and replication lag
db.isMaster() check who is primary
reconfiguration helpers disconnect from the database so the shell will display
an error, even if the command succeeds.
>
replica set helpers
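From pymongo a replica set is addressed by its set name plus one or more seed hosts; the driver then discovers the primary on its own. A sketch with made-up host names and the set name rs0:
from pymongo import MongoClient

client = MongoClient(
    "mongodb://host1:27017,host2:27017/?replicaSet=rs0",
    readPreference="secondaryPreferred",      # like rs.slaveOk(): allow reads on secondaries
)

status = client.admin.command("replSetGetStatus")   # the command rs.status() wraps
print([m["name"] for m in status["members"]])
print(client.primary, client.secondaries)           # discovered member addresses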
> help admin
ls([path]) list files
pwd() returns current directory
listFiles([path]) returns file list
hostname() returns name of this host
cat(fname) returns contents of text file as a string
removeFile(f) delete a file or directory
load(jsfilename) load and execute a .js file
run(program[, args...]) spawn a program and wait for its completion
runProgram(program[, args...]) same as run(), above
sleep(m) sleep m milliseconds
getMemInfo() diagnostic
>
administrative help
> help connect
Normally one specifies the server on the mongo shell command line. Run mongo --help to see those options.
Additional connections may be opened:
var x = new Mongo('host[:port]');
var mydb = x.getDB('mydb');
or
var mydb = connect('host[:port]/mydb');
Note: the REPL prompt only auto-reports getLastError() for the shell command line connection.
>
connect db help
> help keys
Tab completion and command history is available at the command prompt. Some emacs keystrokes are available too:
Ctrl-A start of line
Ctrl-E end of line
Ctrl-K del to end of line
Multi-line commands
You can enter a multi line javascript expression. If parens, braces, etc. are not closed, you will see a new line
beginning with '...' characters. Type the rest of your expression. Press Ctrl-C to abort the data entry if you
get stuck.
>
shortcut keys
> help misc
b = new BinData(subtype,base64str) create a BSON BinData value
b.subtype() the BinData subtype (0..255)
b.length() length of the BinData data in bytes
b.hex() the data as a hex encoded string
b.base64() the data as a base 64 encoded string
b.toString()
b = HexData(subtype,hexstr) create a BSON BinData value from a hex string
b = UUID(hexstr) create a BSON BinData value of UUID subtype
b = MD5(hexstr) create a BSON BinData value of MD5 subtype
"hexstr" string, sequence of hex characters (no 0x prefix) o = new ObjectId() create a new ObjectId
o.getTimestamp() return timestamp derived from first bits of the OID
o.isObjectId
o.toString()
o.equals(otherid) d = ISODate() like Date() but behaves more intuitively when used
d = ISODate('YYYY-MM-DD hh:mm:ss') without an explicit "new " prefix on construction
>
misc
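The BinData, ObjectId and ISODate helpers above correspond to types in the bson package bundled with pymongo. A small sketch:
import datetime

from bson import Binary, ObjectId

o = ObjectId()                   # like `new ObjectId()` in the shell
print(o, o.generation_time)      # o.getTimestamp(): the timestamp encoded in the OID

b = Binary(b"\x00\x01\x02", 0)   # like BinData(subtype, data); 0 is the generic subtype
print(b.subtype, len(b))

d = datetime.datetime.utcnow()   # datetimes are stored as BSON dates, shown as ISODate() in the shell
print(d.isoformat())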
> help mr
See also http://dochub.mongodb.org/core/mapreduce
function mapf() {
// 'this' holds current document to inspect
emit(key, value);
}
function reducef(key,value_array) {
return reduced_value;
}
db.mycollection.mapReduce(mapf, reducef[, options])
options
{[query : <query filter object>]
[, sort : <sort the query. useful for optimization>]
[, limit : <number of objects to return from collection>]
[, out : <output-collection name>]
[, keeptemp: <true|false>]
[, finalize : <finalizefunction>]
[, scope : <object where fields go into javascript global scope >]
[, verbose : true]}
>
mr
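The same map/reduce pair can be run from Python by sending the mapReduce database command through pymongo (the command is deprecated on newer servers, but it matches the shell help above). A sketch with an illustrative collection name:
from bson.code import Code
from bson.son import SON
from pymongo import MongoClient

db = MongoClient("localhost", 27017)["mydb"]

mapf = Code("function () { emit(this.tag, 1); }")
reducef = Code("function (key, values) { return Array.sum(values); }")

result = db.command(SON([
    ("mapReduce", "questions"),   # collection to read from
    ("map", mapf),
    ("reduce", reducef),
    ("out", {"inline": 1}),       # return the results inline instead of writing a collection
]))
print(result["results"])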
Python driver:
pip install pymongo
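A minimal connect-and-query sketch with pymongo, following the tutorial linked at the top (database and collection names are just examples):
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017/")
posts = client["test_database"]["posts"]

post_id = posts.insert_one({"author": "Mike", "text": "My first post!"}).inserted_id
print(posts.find_one({"_id": post_id}))
client.close()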
scrapy:
settings.py
ITEM_PIPELINES = ['stack.pipelines.MongoDBPipeline', ]
MONGODB_SERVER = "localhost"
MONGODB_PORT = 27017
MONGODB_DB = "stackoverflow"
MONGODB_COLLECTION = "questions"
pipelines.py
import pymongo
from scrapy.conf import settings
from scrapy.exceptions import DropItem
from scrapy import log

class MongoDBPipeline(object):

    def __init__(self):
        connection = pymongo.MongoClient(
            settings['MONGODB_SERVER'],
            settings['MONGODB_PORT']
        )
        db = connection[settings['MONGODB_DB']]
        self.collection = db[settings['MONGODB_COLLECTION']]

    def process_item(self, item, spider):
        valid = True
        for data in item:
            if not data:
                valid = False
                raise DropItem("Missing {0}!".format(data))
        if valid:
            self.collection.insert(dict(item))
            log.msg("Question added to MongoDB database!",
                    level=log.DEBUG, spider=spider)
        return item
Scrapy official documentation (https://doc.scrapy.org/en/latest/topics/item-pipeline.html#write-items-to-mongodb):
pipelines.py
import pymongo

class MongoPipeline(object):

    collection_name = 'scrapy_items'

    def __init__(self, mongo_uri, mongo_db):
        self.mongo_uri = mongo_uri
        self.mongo_db = mongo_db

    @classmethod
    def from_crawler(cls, crawler):
        return cls(
            mongo_uri=crawler.settings.get('MONGO_URI'),
            mongo_db=crawler.settings.get('MONGO_DATABASE', 'items')
        )

    def open_spider(self, spider):
        self.client = pymongo.MongoClient(self.mongo_uri)
        self.db = self.client[self.mongo_db]

    def close_spider(self, spider):
        self.client.close()

    def process_item(self, item, spider):
        self.db[self.collection_name].insert_one(dict(item))
        return item
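For this pipeline to take effect it still has to be enabled in settings.py, along these lines (the module path myproject.pipelines is hypothetical; use your own project name):
# settings.py -- a sketch; adjust the module path to your project
ITEM_PIPELINES = {
    'myproject.pipelines.MongoPipeline': 300,
}
MONGO_URI = 'mongodb://localhost:27017'
MONGO_DATABASE = 'items'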
#!/usr/bin/env python # -*- coding: utf-8 -*- """ __title__ = '操作时间的工具类' "" ...