Yet another set of easy-to-understand, easy-to-use AWS S3 Python SDK code examples.

GitHub repo: https://github.com/garyelephant/aws-s3-python-sdk-examples.

"""
Yet another s3 python sdk example.
based on boto 2.27.0
""" import time
import os
import urllib import boto.s3.connection
import boto.s3.key def test():
print '--- running AWS s3 examples ---'
c = boto.s3.connection.S3Connection('<YOUR_AWS_ACCESS_KEY>', '<YOUR_AWS_SECRET_KEY>') print 'original bucket number:', len(c.get_all_buckets()) bucket_name = 'yet.another.s3.example.code'
print 'creating a bucket:', bucket_name
try:
bucket = c.create_bucket(bucket_name)
except boto.exception.S3CreateError as e:
print ' ' * 4, 'error occured:'
print ' ' * 8, 'http status code:', e.status
print ' ' * 8, 'reason:', e.reason
print ' ' * 8, 'body:', e.body
return test_bucket_name = 'no.existence.yet.another.s3.example.code'
    print 'if you just want to know whether the bucket(\'%s\') exists or not' % (test_bucket_name,), \
        'and don\'t want to get this bucket'
    try:
        test_bucket = c.head_bucket(test_bucket_name)
    except boto.exception.S3ResponseError as e:
        if e.status == 403 and e.reason == 'Forbidden':
            print ' ' * 4, 'the bucket(\'%s\') exists but you don\'t have the permission.' % (test_bucket_name,)
        elif e.status == 404 and e.reason == 'Not Found':
            print ' ' * 4, 'the bucket(\'%s\') doesn\'t exist.' % (test_bucket_name,)

    print 'or use lookup() instead of head_bucket() to do the same thing.', \
        'it will return None if the bucket does not exist instead of throwing an exception.'
    test_bucket = c.lookup(test_bucket_name)
    if test_bucket is None:
        print ' ' * 4, 'the bucket(\'%s\') doesn\'t exist.' % (test_bucket_name,)
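    # A tiny sketch, not part of the original example: the lookup() idiom can
    # be folded into a reusable boolean existence check (the helper name is
    # ours, not boto's).
    def bucket_exists(conn, name):
        return conn.lookup(name) is not None
    print ' ' * 4, 'bucket_exists(\'%s\'):' % (test_bucket_name,), bucket_exists(c, test_bucket_name)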
    print 'now you can get the bucket(\'%s\')' % (bucket_name,)
    bucket = c.get_bucket(bucket_name)

    print 'add some objects to bucket', bucket_name
    keys = ['sample.txt', 'notes/2006/January/sample.txt', 'notes/2006/February/sample2.txt',
            'notes/2006/February/sample3.txt', 'notes/2006/February/sample4.txt', 'notes/2006/sample5.txt']
    print ' ' * 4, 'these key names are:'
    for name in keys:
        print ' ' * 8, name

    filename = './_test_dir/sample.txt'
    print ' ' * 4, 'you can set the contents of object(\'%s\') from filename(\'%s\')' % (keys[0], filename,)
    key = boto.s3.key.Key(bucket, keys[0])
    bytes_written = key.set_contents_from_filename(filename)
    assert bytes_written == os.path.getsize(filename), 'error occurred: broken file'

    print ' ' * 4, 'or set the contents of object(\'%s\') from an opened file object' % (keys[1],)
    fp = open(filename, 'r')
    key = boto.s3.key.Key(bucket, keys[1])
    bytes_written = key.set_contents_from_file(fp)
    fp.close()
    assert bytes_written == os.path.getsize(filename), 'error occurred: broken file'

    print ' ' * 4, 'you can also set the contents of the remaining key objects from a string'
    for name in keys[2:]:
        print ' ' * 8, 'key:', name
        key = boto.s3.key.Key(bucket, name)
        s = 'This is the content of %s ' % (name,)
        key.set_contents_from_string(s)
        print ' ' * 8, '..contents:', key.get_contents_as_string()
        # use get_contents_to_filename() to save contents to a specific file in the filesystem.
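    # A short sketch, not in the original flow: get_contents_to_filename()
    # writes an object's contents straight to a local file (the output path
    # here is just an illustration).
    key = boto.s3.key.Key(bucket, keys[2])
    key.get_contents_to_filename('./_test_dir/sample2.downloaded.txt')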
    print 'list all objects added into \'%s\' bucket' % (bucket_name,)
    objs = bucket.list()
    for key in objs:
        print ' ' * 4, key.name

    p = 'notes/2006/'
    print 'list objects that start with \'%s\'' % (p,)
    objs = bucket.list(prefix=p)
    for key in objs:
        print ' ' * 4, key.name

    print 'list objects or key prefixes like \'%s*\', i.e. what sits at the top level of the \'%s\' folder' % (p, p,)
    objs = bucket.list(prefix=p, delimiter='/')
    for key in objs:
        print ' ' * 4, key.name
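    # A sketch, not in the original: with a delimiter, list() yields Key
    # objects for keys and Prefix objects for the common prefixes (the
    # "subfolders"); they can be told apart by type.
    from boto.s3.prefix import Prefix
    for item in bucket.list(prefix=p, delimiter='/'):
        kind = 'prefix' if isinstance(item, Prefix) else 'key'
        print ' ' * 8, '%s: %s' % (kind, item.name)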
    keys_per_page = 4
    print 'manually handle the results paging from s3,', 'number of keys per page:', keys_per_page
    print ' ' * 4, 'get page 1'
    objs = bucket.get_all_keys(max_keys=keys_per_page)
    for key in objs:
        print ' ' * 8, key.name

    print ' ' * 4, 'get page 2'
    last_key_name = objs[-1].name  # the last key of the previous page is the marker used to retrieve the next page
    objs = bucket.get_all_keys(max_keys=keys_per_page, marker=last_key_name)
    for key in objs:
        print ' ' * 8, key.name
"""
get_all_keys() a lower-level method for listing contents of a bucket.
This closely models the actual S3 API and requires you to manually handle the paging of results.
For a higher-level method that handles the details of paging for you, you can use the list() method.
""" print 'you must delete all objects in the bucket \'%s\' before delete this bucket' % (bucket_name, )
    print 'you must delete all objects in the bucket \'%s\' before deleting the bucket itself' % (bucket_name,)
    print ' ' * 4, 'you can delete objects one by one'
    bucket.delete_key(keys[0])
    print ' ' * 4, 'or you can delete multiple objects using a single HTTP request with delete_keys().'
    bucket.delete_keys(keys[1:])
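    # A sketch, not in the original: delete_keys() returns a result object
    # whose .deleted and .errors lists say which keys were removed and which
    # failed; deleting an already-deleted key is not an error in S3, so
    # re-issuing the request here is safe.
    result = bucket.delete_keys(keys[1:])
    print ' ' * 8, 'deleted: %d, errors: %d' % (len(result.deleted), len(result.errors))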
    print 'now you can delete the bucket \'%s\'' % (bucket_name,)
    c.delete_bucket(bucket_name)

# references:
# [1] http://docs.pythonboto.org/
# [2] Amazon S3 API reference


if __name__ == '__main__':
    test()
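A side note on the hardcoded key pair above: boto 2 can also pick up credentials from the environment, which keeps secrets out of the source. A minimal sketch, assuming the standard AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables are set:

import boto

# connect_s3() reads AWS_ACCESS_KEY_ID / AWS_SECRET_ACCESS_KEY from the
# environment (or from a ~/.boto config file) when no keys are passed in.
c = boto.connect_s3()
print 'bucket number:', len(c.get_all_buckets())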

If you repost this article, please credit the author and link to the source [Gary's blog] http://garyelephant.me. Please do not use it for any commercial purpose!

Author: Gary Gao (garygaowork[at]gmail.com), interested in the internet, distributed systems, high performance, NoSQL, automation, and software teams

Support my work: https://me.alipay.com/garygao
