Yet another set of easy-to-understand, easy-to-use AWS S3 Python SDK code examples.

GitHub repository: https://github.com/garyelephant/aws-s3-python-sdk-examples

"""
Yet another s3 python sdk example.
based on boto 2.27.0
""" import time
import os
import urllib import boto.s3.connection
import boto.s3.key def test():
print '--- running AWS s3 examples ---'
c = boto.s3.connection.S3Connection('<YOUR_AWS_ACCESS_KEY>', '<YOUR_AWS_SECRET_KEY>') print 'original bucket number:', len(c.get_all_buckets()) bucket_name = 'yet.another.s3.example.code'
print 'creating a bucket:', bucket_name
try:
bucket = c.create_bucket(bucket_name)
except boto.exception.S3CreateError as e:
print ' ' * 4, 'error occured:'
print ' ' * 8, 'http status code:', e.status
print ' ' * 8, 'reason:', e.reason
print ' ' * 8, 'body:', e.body
return test_bucket_name = 'no.existence.yet.another.s3.example.code'
print 'if you just want to know whether the bucket(\'%s\') exists or not' % (test_bucket_name,), \
'and don\'t want to get this bucket'
try:
test_bucket = c.head_bucket(test_bucket_name)
except boto.exception.S3ResponseError as e:
if e.status == 403 and e.reason == 'Forbidden':
print ' ' * 4, 'the bucket(\'%s\') exists but you don\'t have the permission.' % (test_bucket_name,)
elif e.status == 404 and e.reason == 'Not Found':
print ' ' * 4, 'the bucket(\'%s\') doesn\'t exist.' % (test_bucket_name,) print 'or use lookup() instead of head_bucket() to do the same thing.', \
'it will return None if the bucket does not exist instead of throwing an exception.'
test_bucket = c.lookup(test_bucket_name)
if test_bucket is None:
print ' ' * 4, 'the bucket(\'%s\') doesn\'t exist.' % (test_bucket_name,) print 'now you can get the bucket(\'%s\')' % (bucket_name,)
bucket = c.get_bucket(bucket_name) print 'add some objects to bucket ', bucket_name
keys = ['sample.txt', 'notes/2006/January/sample.txt', 'notes/2006/February/sample2.txt',\
'notes/2006/February/sample3.txt', 'notes/2006/February/sample4.txt', 'notes/2006/sample5.txt']
print ' ' * 4, 'these key names are:'
for name in keys:
print ' ' * 8, name filename = './_test_dir/sample.txt'
print ' ' * 4, 'you can contents of object(\'%s\') from filename(\'%s\')' % (keys[0], filename,)
key = boto.s3.key.Key(bucket, keys[0])
bytes_written = key.set_contents_from_filename(filename)
assert bytes_written == os.path.getsize(filename), ' error occured:broken file' print ' ' * 4, 'or set contents of object(\'%s\') by opened file object' % (keys[1],)
fp = open(filename, 'r')
key = boto.s3.key.Key(bucket, keys[1])
bytes_written = key.set_contents_from_file(fp)
assert bytes_written == os.path.getsize(filename), ' error occured:broken file' print ' ' * 4, 'you can also set contents the remaining key objects from string'
for name in keys[2:]:
print ' ' * 8, 'key:', name
key = boto.s3.key.Key(bucket, name)
s = 'This is the content of %s ' % (name,)
key.set_contents_from_string(s)
print ' ' * 8, '..contents:', key.get_contents_as_string()
# use get_contents_to_filename() to save contents to a specific file in the filesystem. #print 'You have %d objects in bucket %s' % () print 'list all objects added into \'%s\' bucket' % (bucket_name,)
objs = bucket.list()
for key in objs:
print ' ' * 4, key.name p = 'notes/2006/'
print 'list objects start with \'%s\'' % (p,)
objs = bucket.list(prefix = p)
for key in objs:
print ' ' * 4, key.name print 'list objects or key prefixs like \'%s/*\', something like what\'s in the top of \'%s\' folder ?' % (p, p,)
objs = bucket.list(prefix = p, delimiter = '/')
for key in objs:
print ' ' * 4, key.name keys_per_page = 4
print 'manually handle the results paging from s3,', ' number of keys per page:', keys_per_page
print ' ' * 4, 'get page 1'
objs = bucket.get_all_keys(max_keys = keys_per_page)
for key in objs:
print ' ' * 8, key.name print ' ' * 4, 'get page 2'
last_key_name = objs[-1].name #last key of last page is the marker to retrive next page.
objs = bucket.get_all_keys(max_keys = keys_per_page, marker = last_key_name)
for key in objs:
print ' ' * 8, key.name
"""
get_all_keys() a lower-level method for listing contents of a bucket.
This closely models the actual S3 API and requires you to manually handle the paging of results.
For a higher-level method that handles the details of paging for you, you can use the list() method.
""" print 'you must delete all objects in the bucket \'%s\' before delete this bucket' % (bucket_name, )
print ' ' * 4, 'you can delete objects one by one'
bucket.delete_key(keys[0])
print ' ' * 4, 'or you can delete multiple objects using a single HTTP request with delete_keys().'
bucket.delete_keys(keys[1:]) print 'now you can delete the bucket \'%s\'' % (bucket_name,)
c.delete_bucket(bucket) #references:
# [1] http://docs.pythonboto.org/
# [2] amazon s3 api references if __name__ == '__main__':
test()
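The marker-based paging that get_all_keys() makes you handle by hand can be sketched without talking to S3 at all. The helper below (list_pages is a hypothetical name, not part of boto) pages through a sorted list of key names the way S3 does: each "request" returns at most max_keys keys that sort strictly after the marker, and the last key of each page becomes the marker for the next. This is a minimal local sketch of the semantics, assuming the lexicographic ordering S3 uses for listings:

```python
def list_pages(all_keys, max_keys):
    """Simulate S3 marker-based paging over a lexicographically sorted key list."""
    keys = sorted(all_keys)
    marker = ''          # empty marker means "start from the beginning"
    pages = []
    while True:
        # each page holds up to max_keys keys that sort strictly after the marker
        page = [k for k in keys if k > marker][:max_keys]
        if not page:
            break
        pages.append(page)
        marker = page[-1]  # last key of this page is the marker for the next request
    return pages

# the same six key names used in the example above
keys = ['sample.txt', 'notes/2006/January/sample.txt', 'notes/2006/February/sample2.txt',
        'notes/2006/February/sample3.txt', 'notes/2006/February/sample4.txt', 'notes/2006/sample5.txt']
pages = list_pages(keys, 4)
for i, page in enumerate(pages, 1):
    print('page %d: %s' % (i, page))
```

With six keys and four keys per page, this yields two pages (4 keys, then 2), which mirrors the "get page 1 / get page 2" calls in the example.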

When reposting this article, please credit the author and source [Gary's blog] http://garyelephant.me, and do not use it for any commercial purpose.

Author: Gary Gao (garygaowork[at]gmail.com), interested in the Internet, distributed systems, high performance, NoSQL, automation, and software teams.

Support my work: https://me.alipay.com/garygao
