AWS is impressive, deep-pocketed, and hugely popular, but I find its documentation very hard to work with, and the support is worse: the response times are outrageous and leave you speechless. Still, it is AWS, so I grit my teeth and treat this as just one more pit to climb out of.

1. Image upload

1.1 SDK version

The S3 Java SDK comes in two major versions, 1.x and 2.x. In 1.x the client class is AmazonS3; the 2.x client goes by a different name (S3Client). The sample code below uses the 1.x SDK.
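For reference only, here is a minimal sketch of the same upload with the 2.x SDK (software.amazon.awssdk:s3). It reuses the bucket, key, and file-path placeholders from the 1.x examples below; the class name is just for illustration and this is not part of the original walkthrough.

import java.nio.file.Paths;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class SdkV2UploadSketch {
    public static void main(String[] args) {
        // The 2.x client class is S3Client, built with a fluent builder.
        S3Client s3 = S3Client.builder()
                .region(Region.AP_EAST_1)
                .build();
        // Single-call upload; credentials are resolved from the same
        // ~/.aws/credentials file described in section 1.4.
        s3.putObject(PutObjectRequest.builder()
                        .bucket("hotupdates")
                        .key("abc/demo.txt")
                        .build(),
                RequestBody.fromFile(Paths.get("C:\\Users\\Owner\\Desktop\\demo.txt")));
        s3.close();
    }
}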

1.2 Maven dependencies

<dependencyManagement>
    <dependencies>
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-bom</artifactId>
            <version>1.11.558</version>
            <type>pom</type>
            <scope>import</scope>
        </dependency>
    </dependencies>
</dependencyManagement>

<dependencies>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-s3</artifactId>
    </dependency>
    <dependency>
        <groupId>com.amazonaws</groupId>
        <artifactId>aws-java-sdk-s3control</artifactId>
    </dependency>
</dependencies>

1.3 Code. It runs as-is, but the access key ID and secret key still need to be configured (see 1.4).

import java.io.File;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class HighLevelMultipartUpload {

    public static void main(String[] args) throws Exception {
        String clientRegion = "ap-east-1";
        String bucketName = "hotupdates";
        String keyName = "abc/demo.txt";
        String filePath = "C:\\Users\\Owner\\Desktop\\demo.txt";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();
            TransferManager tm = TransferManagerBuilder.standard()
                    .withS3Client(s3Client)
                    .build();
            // TransferManager uploads asynchronously, so this call returns immediately.
            Upload upload = tm.upload(bucketName, keyName, new File(filePath));
            System.out.println("Object upload started");
            // Block until the upload finishes, then release the TransferManager's threads.
            upload.waitForCompletion();
            tm.shutdownNow();
            System.out.println("Object upload complete");
        } catch (AmazonServiceException e) {
            // The request reached S3 but was rejected.
            e.printStackTrace();
        } catch (SdkClientException e) {
            // S3 could not be contacted, or the response could not be parsed.
            e.printStackTrace();
        }
    }
}

1.4 Configure the access key ID and secret key

On Windows, create two files in the C:\Users\xxx\.aws directory. Note that neither file has an extension. (If you prefer to pass credentials in code instead, see the sketch after the two files below.)

File 1: config

[default]
region = ap-east-1

File 2: credentials

[default]
aws_access_key_id=xxxxxxxxxxxxxxxxxxx
aws_secret_access_key=xxxxxxxxxxxxxxxxxxx
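As an alternative to the ~/.aws files, the 1.x SDK also accepts credentials supplied directly in code. A minimal sketch, with placeholder key values and a hypothetical class name; hard-coding real keys in source is not recommended:

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class InCodeCredentialsSketch {
    public static void main(String[] args) {
        // Placeholder values; substitute your own access key ID and secret key.
        BasicAWSCredentials creds =
                new BasicAWSCredentials("xxxxxxxxxxxxxxxxxxx", "xxxxxxxxxxxxxxxxxxx");
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("ap-east-1")
                // Replaces the ProfileCredentialsProvider used in the main example.
                .withCredentials(new AWSStaticCredentialsProvider(creds))
                .build();
        System.out.println("Buckets visible to these credentials: " + s3Client.listBuckets().size());
    }
}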

1.5 At this point the example can be run.

1.6 Printing a progress bar while uploading

import java.io.File;

import com.amazonaws.AmazonServiceException;
import com.amazonaws.SdkClientException;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class HighLevelMultipartUpload2 {

    public static void main(String[] args) throws Exception {
        String clientRegion = "ap-east-1";
        String bucketName = "hotupdates";
        String keyName = "abc/demo.txt";
        String filePath = "C:\\Users\\Owner\\Desktop\\demo.txt";

        try {
            AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                    .withRegion(clientRegion)
                    .withCredentials(new ProfileCredentialsProvider())
                    .build();
            TransferManager tm = TransferManagerBuilder.standard()
                    .withS3Client(s3Client)
                    .build();
            // TransferManager processes all transfers asynchronously,
            // so this call returns immediately.
            Upload upload = tm.upload(bucketName, keyName, new File(filePath));
            System.out.println("Object upload started");
            // To show the progress bar, use the XferMgrProgress helpers below
            // instead of waiting silently:
            // XferMgrProgress.showTransferProgress(upload);
            // XferMgrProgress.waitForCompletion(upload);
            // Optionally, wait for the upload to finish before continuing.
            upload.waitForCompletion();
            tm.shutdownNow();
            System.out.println("Object upload complete");
        } catch (AmazonServiceException e) {
            e.printStackTrace();
        } catch (SdkClientException e) {
            e.printStackTrace();
        }
    }
}
import java.io.File;
import java.util.ArrayList;
import java.util.Collection;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.event.ProgressEvent;
import com.amazonaws.event.ProgressListener;
import com.amazonaws.services.s3.transfer.MultipleFileUpload;
import com.amazonaws.services.s3.transfer.Transfer;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.TransferProgress;
import com.amazonaws.services.s3.transfer.Upload;

public class XferMgrProgress {

    // Waits for the transfer to complete, catching any exceptions that occur.
    public static void waitForCompletion(Transfer xfer) {
        try {
            xfer.waitForCompletion();
        } catch (AmazonServiceException e) {
            System.err.println("Amazon service error: " + e.getMessage());
            System.exit(1);
        } catch (AmazonClientException e) {
            System.err.println("Amazon client error: " + e.getMessage());
            System.exit(1);
        } catch (InterruptedException e) {
            System.err.println("Transfer interrupted: " + e.getMessage());
            System.exit(1);
        }
    }

    // Prints progress while waiting for the transfer to finish.
    public static void showTransferProgress(Transfer xfer) {
        // Print the transfer's human-readable description.
        System.out.println(xfer.getDescription());
        // Print an empty progress bar...
        printProgressBar(0.0);
        // ...and update it while the transfer is ongoing.
        do {
            try {
                Thread.sleep(100);
            } catch (InterruptedException e) {
                return;
            }
            // Note: so_far and total aren't used; they're just for
            // documentation purposes.
            TransferProgress progress = xfer.getProgress();
            long so_far = progress.getBytesTransferred();
            long total = progress.getTotalBytesToTransfer();
            double pct = progress.getPercentTransferred();
            eraseProgressBar();
            printProgressBar(pct);
        } while (xfer.isDone() == false);
        // Print the final state of the transfer.
        Transfer.TransferState xfer_state = xfer.getState();
        System.out.println(": " + xfer_state);
    }

    // Prints progress of a multiple-file upload while waiting for it to finish.
    public static void showMultiUploadProgress(MultipleFileUpload multi_upload) {
        // Print the upload's human-readable description.
        System.out.println(multi_upload.getDescription());

        Collection<? extends Upload> sub_xfers = new ArrayList<Upload>();
        sub_xfers = multi_upload.getSubTransfers();

        do {
            System.out.println("\nSubtransfer progress:\n");
            for (Upload u : sub_xfers) {
                System.out.println("  " + u.getDescription());
                if (u.isDone()) {
                    Transfer.TransferState xfer_state = u.getState();
                    System.out.println("  " + xfer_state);
                } else {
                    TransferProgress progress = u.getProgress();
                    double pct = progress.getPercentTransferred();
                    printProgressBar(pct);
                    System.out.println();
                }
            }
            // Wait a bit before the next update.
            try {
                Thread.sleep(200);
            } catch (InterruptedException e) {
                return;
            }
        } while (multi_upload.isDone() == false);
        // Print the final state of the transfer.
        Transfer.TransferState xfer_state = multi_upload.getState();
        System.out.println("\nMultipleFileUpload " + xfer_state);
    }

    // Prints a simple text progress bar: [#####     ]
    public static void printProgressBar(double pct) {
        // If bar_size changes, then change erase_bar (in eraseProgressBar) to match.
        final int bar_size = 40;
        final String empty_bar = "                                        ";
        final String filled_bar = "########################################";
        int amt_full = (int) (bar_size * (pct / 100.0));
        System.out.format("  [%s%s]", filled_bar.substring(0, amt_full),
                empty_bar.substring(0, bar_size - amt_full));
    }

    // Erases the progress bar.
    public static void eraseProgressBar() {
        // erase_bar is bar_size (from printProgressBar) + 4 chars.
        final String erase_bar =
                "\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b\b";
        System.out.format(erase_bar);
    }

    public static void uploadFileWithListener(String file_path,
            String bucket_name, String key_prefix, boolean pause) {
        System.out.println("file: " + file_path +
                (pause ? " (pause)" : ""));

        String key_name = null;
        if (key_prefix != null) {
            key_name = key_prefix + '/' + file_path;
        } else {
            key_name = file_path;
        }

        File f = new File(file_path);
        TransferManager xfer_mgr = TransferManagerBuilder.standard().build();
        try {
            Upload u = xfer_mgr.upload(bucket_name, key_name, f);
            // Print an empty progress bar...
            printProgressBar(0.0);
            // ...and redraw it from a progress listener as bytes are transferred.
            u.addProgressListener(new ProgressListener() {
                public void progressChanged(ProgressEvent e) {
                    double pct = e.getBytesTransferred() * 100.0 / e.getBytes();
                    eraseProgressBar();
                    printProgressBar(pct);
                }
            });
            // Block with Transfer.waitForCompletion().
            XferMgrProgress.waitForCompletion(u);
            // Print the final state of the transfer.
            Transfer.TransferState xfer_state = u.getState();
            System.out.println(": " + xfer_state);
        } catch (AmazonServiceException e) {
            System.err.println(e.getErrorMessage());
            System.exit(1);
        }
        xfer_mgr.shutdownNow();
    }

    public static void uploadDirWithSubprogress(String dir_path,
            String bucket_name, String key_prefix, boolean recursive,
            boolean pause) {
        System.out.println("directory: " + dir_path + (recursive ?
                " (recursive)" : "") + (pause ? " (pause)" : ""));

        TransferManager xfer_mgr = TransferManagerBuilder.standard().build();
        try {
            MultipleFileUpload multi_upload = xfer_mgr.uploadDirectory(
                    bucket_name, key_prefix, new File(dir_path), recursive);
            // Loop with Transfer.isDone()...
            XferMgrProgress.showMultiUploadProgress(multi_upload);
            // ...or block with Transfer.waitForCompletion().
            XferMgrProgress.waitForCompletion(multi_upload);
        } catch (AmazonServiceException e) {
            System.err.println(e.getErrorMessage());
            System.exit(1);
        }
        xfer_mgr.shutdownNow();
    }

    public static void main(String[] args) {
        final String USAGE = "\n" +
                "Usage:\n" +
                "    XferMgrProgress [--recursive] [--pause] <s3_path> <local_path>\n\n" +
                "Where:\n" +
                "    --recursive - Only applied if local_path is a directory.\n" +
                "                  Copies the contents of the directory recursively.\n\n" +
                "    --pause     - Attempt to pause+resume the upload. This may not work for\n" +
                "                  small files.\n\n" +
                "    s3_path     - The S3 destination (bucket/path) to upload the file(s) to.\n\n" +
                "    local_path  - The path to a local file or directory path to upload to S3.\n\n" +
                "Examples:\n" +
                "    XferMgrProgress public_photos/cat_happy.png my_photos/funny_cat.png\n" +
                "    XferMgrProgress public_photos my_photos/cat_sad.png\n" +
                "    XferMgrProgress public_photos my_photos\n\n";

        if (args.length < 2) {
            System.out.println(USAGE);
            System.exit(1);
        }

        int cur_arg = 0;
        boolean recursive = false;
        boolean pause = false;

        // First, parse any switches.
        while (args[cur_arg].startsWith("--")) {
            if (args[cur_arg].equals("--recursive")) {
                recursive = true;
            } else if (args[cur_arg].equals("--pause")) {
                pause = true;
            } else {
                System.out.println("Unknown argument: " + args[cur_arg]);
                System.out.println(USAGE);
                System.exit(1);
            }
            cur_arg += 1;
        }

        // Only the first '/' character is of interest to get the bucket name.
        // Subsequent ones are part of the key name.
        String s3_path[] = args[cur_arg].split("/", 2);
        cur_arg += 1;

        String bucket_name = s3_path[0];
        String key_prefix = null;
        if (s3_path.length > 1) {
            key_prefix = s3_path[1];
        }

        String local_path = args[cur_arg];

        // Check to see if the local path is a directory or a file...
        File f = new File(args[cur_arg]);
        if (f.exists() == false) {
            System.out.println("Input path doesn't exist: " + args[cur_arg]);
            System.exit(1);
        } else if (f.isDirectory()) {
            uploadDirWithSubprogress(local_path, bucket_name, key_prefix,
                    recursive, pause);
        } else {
            uploadFileWithListener(local_path, bucket_name, key_prefix, pause);
        }
    }
}
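If you only need a percentage readout rather than the full sample above, you can simply poll the Upload's TransferProgress. A minimal sketch, reusing the bucket/key/path placeholders from section 1.3; the class name is hypothetical:

import java.io.File;

import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.TransferManagerBuilder;
import com.amazonaws.services.s3.transfer.Upload;

public class SimpleProgressSketch {
    public static void main(String[] args) throws InterruptedException {
        TransferManager tm = TransferManagerBuilder.standard().build();
        Upload upload = tm.upload("hotupdates", "abc/demo.txt",
                new File("C:\\Users\\Owner\\Desktop\\demo.txt"));
        // Poll the transfer until it is done, printing the percentage each time.
        while (!upload.isDone()) {
            System.out.printf("Transferred: %.1f%%%n",
                    upload.getProgress().getPercentTransferred());
            Thread.sleep(200);
        }
        System.out.println("Final state: " + upload.getState());
        tm.shutdownNow();
    }
}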

Reference (official documentation): https://docs.aws.amazon.com/zh_cn/sdk-for-java/v1/developer-guide/credentials.html

2. Other configuration

2.1 CORS

<CORSConfiguration>
    <CORSRule>
        <AllowedOrigin>*</AllowedOrigin>
        <AllowedMethod>PUT</AllowedMethod>
        <AllowedMethod>POST</AllowedMethod>
        <AllowedMethod>DELETE</AllowedMethod>
        <AllowedMethod>GET</AllowedMethod>
        <AllowedMethod>HEAD</AllowedMethod>
        <AllowedHeader>*</AllowedHeader>
        <MaxAgeSeconds>3000</MaxAgeSeconds>
    </CORSRule>
</CORSConfiguration>
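This rule is normally pasted into the bucket's CORS configuration in the S3 console, but it can also be applied from code. A sketch using the 1.x SDK, assuming the same client setup and bucket name as section 1.3; treat it as an illustration rather than part of the original setup:

import java.util.Arrays;

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.BucketCrossOriginConfiguration;
import com.amazonaws.services.s3.model.CORSRule;

public class CorsConfigSketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("ap-east-1")
                .build();
        // Equivalent of the XML rule above: all origins/headers, five methods, 3000s max age.
        CORSRule rule = new CORSRule()
                .withAllowedOrigins(Arrays.asList("*"))
                .withAllowedMethods(Arrays.asList(
                        CORSRule.AllowedMethods.PUT,
                        CORSRule.AllowedMethods.POST,
                        CORSRule.AllowedMethods.DELETE,
                        CORSRule.AllowedMethods.GET,
                        CORSRule.AllowedMethods.HEAD))
                .withAllowedHeaders(Arrays.asList("*"))
                .withMaxAgeSeconds(3000);
        s3Client.setBucketCrossOriginConfiguration("hotupdates",
                new BucketCrossOriginConfiguration(Arrays.asList(rule)));
    }
}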

2.2 Bucket policy

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "Stmt1546506260886",
            "Effect": "Allow",
            "Principal": {
                "AWS": "*"
            },
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::apk-online/*"
        }
    ]
}

apk-online is the bucket name; this policy allows anyone to GET objects from that bucket.
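The policy can also be attached programmatically instead of through the console. A minimal sketch with the 1.x SDK, assuming the same client setup as section 1.3 (the class name is just for illustration):

import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;

public class BucketPolicySketch {
    public static void main(String[] args) {
        AmazonS3 s3Client = AmazonS3ClientBuilder.standard()
                .withRegion("ap-east-1")
                .build();
        // The same public-read policy as above, as a JSON string.
        String policyText = "{\n" +
                "  \"Version\": \"2012-10-17\",\n" +
                "  \"Statement\": [{\n" +
                "    \"Sid\": \"Stmt1546506260886\",\n" +
                "    \"Effect\": \"Allow\",\n" +
                "    \"Principal\": {\"AWS\": \"*\"},\n" +
                "    \"Action\": \"s3:GetObject\",\n" +
                "    \"Resource\": \"arn:aws:s3:::apk-online/*\"\n" +
                "  }]\n" +
                "}";
        s3Client.setBucketPolicy("apk-online", policyText);
    }
}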
