Amazon AWS S3 Operations Manual
Install the SDK
The recommended way to use the AWS SDK for Java in your project is to consume it from Maven. Import the aws-java-sdk-bom and specify the SDK Maven modules that your project needs in the dependencies.
Importing the BOM
<dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-bom</artifactId>
      <version>1.11.63</version>
      <type>pom</type>
      <scope>import</scope>
    </dependency>
  </dependencies>
</dependencyManagement>
Using the SDK Maven modules
<dependencies>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-ec2</artifactId>
  </dependency>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-s3</artifactId>
  </dependency>
  <dependency>
    <groupId>com.amazonaws</groupId>
    <artifactId>aws-java-sdk-dynamodb</artifactId>
  </dependency>
</dependencies>
See the Set up the AWS SDK for Java section of the developer guide for more information about installing the SDK through other means.
Features
Provides easy-to-use HTTP clients for all supported AWS services, regions, and authentication protocols.
Client-Side Data Encryption for Amazon S3 - Helps improve the security of storing application data in Amazon S3.
Amazon DynamoDB Object Mapper - Uses Plain Old Java Objects (POJOs) to store and retrieve Amazon DynamoDB data.
Amazon S3 Transfer Manager - With a simple API, achieve enhanced throughput, performance, and reliability by using multi-threaded Amazon S3 multipart calls (see the sketch after this list).
Amazon SQS Client-Side Buffering - Collect and send SQS requests in asynchronous batches, improving application and network performance.
Automatically uses IAM Instance Profile Credentials on configured Amazon EC2 instances.
And more!
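As a small illustration of the Transfer Manager feature mentioned above, a minimal upload might look like the following. This is only a sketch against the 1.11.x API; the class name, bucket, key, and file names are placeholders, not part of the original samples.
import java.io.File;

import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.transfer.TransferManager;
import com.amazonaws.services.s3.transfer.Upload;

public class TransferManagerSample {
    public static void main(String[] args) throws Exception {
        // TransferManager performs multi-threaded multipart uploads for large files.
        TransferManager tm = new TransferManager(new ProfileCredentialsProvider());
        // "my-bucket", "my-key" and "local-file.txt" are placeholder values.
        Upload upload = tm.upload("my-bucket", "my-key", new File("local-file.txt"));
        upload.waitForCompletion();  // blocks until every part has been uploaded
        tm.shutdownNow();            // releases the underlying thread pool
    }
}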
Building From Source
Once you check out the code from GitHub, you can build it using Maven. To disable the GPG-signing in the build, use:
mvn clean install -Dgpg.skip=true
Supported Versions
1.11.x - Recommended.
1.10.x - Approved. Only major critical bugs will be fixed. To get new features, upgrade to the 1.11.x version of the SDK.
https://github.com/helloworldtang/aws-sdk-java
/*
* Copyright 2010-2016 Amazon.com, Inc. or its affiliates. All Rights Reserved.
*
* Licensed under the Apache License, Version 2.0 (the "License").
* You may not use this file except in compliance with the License.
* A copy of the License is located at
*
* http://aws.amazon.com/apache2.0
*
* or in the "license" file accompanying this file. This file is distributed
* on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either
* express or implied. See the License for the specific language governing
* permissions and limitations under the License.
*/
import java.io.BufferedReader;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStreamWriter;
import java.io.Writer;
import java.util.UUID;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.auth.AWSCredentials;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.regions.Region;
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.Bucket;
import com.amazonaws.services.s3.model.GetObjectRequest;
import com.amazonaws.services.s3.model.ListObjectsRequest;
import com.amazonaws.services.s3.model.ObjectListing;
import com.amazonaws.services.s3.model.PutObjectRequest;
import com.amazonaws.services.s3.model.S3Object;
import com.amazonaws.services.s3.model.S3ObjectSummary;

/**
* This sample demonstrates how to make basic requests to Amazon S3 using the
* AWS SDK for Java.
* <p>
* <b>Prerequisites:</b> You must have a valid Amazon Web Services developer
* account, and be signed up to use Amazon S3. For more information on Amazon
* S3, see http://aws.amazon.com/s3.
* <p>
* Fill in your AWS access credentials in the provided credentials file
* template, and be sure to move the file to the default location
* (~/.aws/credentials) where the sample code will load the credentials from.
* <p>
* <b>WARNING:</b> To avoid accidental leakage of your credentials, DO NOT keep
* the credentials file in your source directory.
*
* http://aws.amazon.com/security-credentials
*/
public class S3Sample {

public static void main(String[] args) throws IOException {
/*
* The ProfileCredentialsProvider will return your [default]
* credential profile by reading from the credentials file located at
* (~/.aws/credentials).
*/
AWSCredentials credentials = null;
try {
credentials = new ProfileCredentialsProvider().getCredentials();
} catch (Exception e) {
throw new AmazonClientException(
"Cannot load the credentials from the credential profiles file. " +
"Please make sure that your credentials file is at the correct " +
"location (~/.aws/credentials), and is in valid format.",
e);
}

AmazonS3 s3 = new AmazonS3Client(credentials);
Region usWest2 = Region.getRegion(Regions.US_WEST_2);
s3.setRegion(usWest2);

String bucketName = "my-first-s3-bucket-" + UUID.randomUUID();
String key = "MyObjectKey"; // the key may contain a path such as a/b, which creates file b under directory a

System.out.println("===========================================");
System.out.println("Getting Started with Amazon S3");
System.out.println("===========================================\n"); try {
/*
* Create a new S3 bucket - Amazon S3 bucket names are globally unique,
* so once a bucket name has been taken by any user, you can't create
* another bucket with that same name.
*
* You can optionally specify a location for your bucket if you want to
* keep your data closer to your applications or users.
*/
System.out.println("Creating bucket " + bucketName + "\n");
s3.createBucket(bucketName);

/*
* List the buckets in your account
*/
System.out.println("Listing buckets");
for (Bucket bucket : s3.listBuckets()) {
System.out.println(" - " + bucket.getName());
}
System.out.println();

/*
* Upload an object to your bucket - You can easily upload a file to
* S3, or upload directly an InputStream if you know the length of
* the data in the stream. You can also specify your own metadata
* when uploading to S3, which allows you set a variety of options
* like content-type and content-encoding, plus additional metadata
* specific to your applications.
*/
System.out.println("Uploading a new object to S3 from a file\n");
s3.putObject(new PutObjectRequest(bucketName, key, createSampleFile()));

/*
* Download an object - When you download an object, you get all of
* the object's metadata and a stream from which to read the contents.
* It's important to read the contents of the stream as quickly as
* possibly since the data is streamed directly from Amazon S3 and your
* network connection will remain open until you read all the data or
* close the input stream.
*
* GetObjectRequest also supports several other options, including
* conditional downloading of objects based on modification times,
* ETags, and selectively downloading a range of an object.
*/
System.out.println("Downloading an object");
S3Object object = s3.getObject(new GetObjectRequest(bucketName, key));
System.out.println("Content-Type: " + object.getObjectMetadata().getContentType());
displayTextInputStream(object.getObjectContent());

/*
* List objects in your bucket by prefix - There are many options for
* listing the objects in your bucket. Keep in mind that buckets with
* many objects might truncate their results when listing their objects,
* so be sure to check if the returned object listing is truncated, and
* use the AmazonS3.listNextBatchOfObjects(...) operation to retrieve
* additional results.
*/
System.out.println("Listing objects");
ObjectListing objectListing = s3.listObjects(new ListObjectsRequest()
.withBucketName(bucketName)
.withPrefix("My"));
for (S3ObjectSummary objectSummary : objectListing.getObjectSummaries()) {
System.out.println(" - " + objectSummary.getKey() + " " +
"(size = " + objectSummary.getSize() + ")");
}
System.out.println();

/*
* Delete an object - Unless versioning has been turned on for your bucket,
* there is no way to undelete an object, so use caution when deleting objects.
*/
System.out.println("Deleting an object\n");
s3.deleteObject(bucketName, key);

/*
* Delete a bucket - A bucket must be completely empty before it can be
* deleted, so remember to delete any objects from your buckets before
* you try to delete them.
*/
System.out.println("Deleting bucket " + bucketName + "\n");
s3.deleteBucket(bucketName); // a non-empty bucket cannot be deleted
} catch (AmazonServiceException ase) {
System.out.println("Caught an AmazonServiceException, which means your request made it "
+ "to Amazon S3, but was rejected with an error response for some reason.");
System.out.println("Error Message: " + ase.getMessage());
System.out.println("HTTP Status Code: " + ase.getStatusCode());
System.out.println("AWS Error Code: " + ase.getErrorCode());
System.out.println("Error Type: " + ase.getErrorType());
System.out.println("Request ID: " + ase.getRequestId());
} catch (AmazonClientException ace) {
System.out.println("Caught an AmazonClientException, which means the client encountered "
+ "a serious internal problem while trying to communicate with S3, "
+ "such as not being able to access the network.");
System.out.println("Error Message: " + ace.getMessage());
}
}

/**
* Creates a temporary file with text data to demonstrate uploading a file
* to Amazon S3
*
* @return A newly created temporary file with text data.
*
* @throws IOException
*/
private static File createSampleFile() throws IOException {
File file = File.createTempFile("aws-java-sdk-", ".txt");
file.deleteOnExit();

Writer writer = new OutputStreamWriter(new FileOutputStream(file));
writer.write("abcdefghijklmnopqrstuvwxyz\n");
writer.write("01234567890112345678901234\n");
writer.write("!@#$%^&*()-=[]{};':',.<>/?\n");
writer.write("01234567890112345678901234\n");
writer.write("abcdefghijklmnopqrstuvwxyz\n");
writer.close();

return file;
}

/**
* Displays the contents of the specified input stream as text.
*
* @param input
* The input stream to display as text.
*
* @throws IOException
*/
private static void displayTextInputStream(InputStream input) throws IOException {
BufferedReader reader = new BufferedReader(new InputStreamReader(input));
while (true) {
String line = reader.readLine();
if (line == null) break;
System.out.println(" " + line);
}
System.out.println();
}
}
https://github.com/aws/aws-sdk-java/blob/master/src/samples/AmazonS3/S3Sample.java
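Calling deleteBucket on a bucket that still contains objects fails with an HTTP 409 BucketNotEmpty error, for example: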
Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: The bucket you tried to delete is not empty (Service: Amazon S3; Status Code: 409; Error Code: BucketNotEmpty; Request ID: EFBEEBCD746B48C3), S3 Extended Request ID: PPZs8QJiKDUjcETOTr/501ymr1XYKSy+9Q4fxsma/cyo2TIcHZCSW1gZezXp461A9HvQ4zudx/Y=
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1545)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1183)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:964)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.doExecute(AmazonHttpClient.java:676)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeWithTimer(AmazonHttpClient.java:650)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.execute(AmazonHttpClient.java:633)
at com.amazonaws.http.AmazonHttpClient$RequestExecutor.access$300(AmazonHttpClient.java:601)
at com.amazonaws.http.AmazonHttpClient$RequestExecutionBuilderImpl.execute(AmazonHttpClient.java:583)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:447)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:4137)
at com.amazonaws.services.s3.AmazonS3Client.deleteBucket(AmazonS3Client.java:1534)
at com.amazonaws.services.s3.AmazonS3Client.deleteBucket(AmazonS3Client.java:1520)
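One way to avoid this is to delete every object first (handling truncated listings) and only then delete the bucket. A minimal sketch, reusing the s3 client and bucketName from the sample above and ignoring object versions:
ObjectListing listing = s3.listObjects(bucketName);
while (true) {
    for (S3ObjectSummary summary : listing.getObjectSummaries()) {
        s3.deleteObject(bucketName, summary.getKey());   // remove each object
    }
    if (!listing.isTruncated()) break;                   // all pages processed
    listing = s3.listNextBatchOfObjects(listing);        // fetch the next page
}
s3.deleteBucket(bucketName);                             // the bucket is now empty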
Getting a usable public URL for an uploaded object:
s3Client.putObject(new PutObjectRequest("your-bucket", "some-path/some-key.jpg", new File("somePath/someKey.jpg")).withCannedAcl(CannedAccessControlList.PublicRead));
s3Client.getResourceUrl("your-bucket", "some-path/some-key.jpg");
http://stackoverflow.com/questions/10975475/amazon-s3-upload-file-and-get-url
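The following snippet, adapted from the Stack Overflow answer linked after it, sets the content length explicitly when uploading from an InputStream; event is presumably a PrimeFaces FileUploadEvent, and s3client, bucketName, keyName, and contentBytes are assumed to be declared elsewhere.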
/*
* Obtain the Content length of the Input stream for S3 header
*/
try {
InputStream is = event.getFile().getInputstream();
contentBytes = IOUtils.toByteArray(is);
} catch (IOException e) {
System.err.printf("Failed while reading bytes from %s", e.getMessage());
}

Long contentLength = Long.valueOf(contentBytes.length);

ObjectMetadata metadata = new ObjectMetadata();
metadata.setContentLength(contentLength);

/*
* Reobtain the tmp uploaded file as input stream
*/
InputStream inputStream = event.getFile().getInputstream();

/*
* Put the object in S3
*/
try {
s3client.putObject(new PutObjectRequest(bucketName, keyName, inputStream, metadata));
} catch (AmazonServiceException ase) {
System.out.println("Error Message: " + ase.getMessage());
System.out.println("HTTP Status Code: " + ase.getStatusCode());
System.out.println("AWS Error Code: " + ase.getErrorCode());
System.out.println("Error Type: " + ase.getErrorType());
System.out.println("Request ID: " + ase.getRequestId());
} catch (AmazonClientException ace) {
System.out.println("Error Message: " + ace.getMessage());
} finally {
if (inputStream != null) {
inputStream.close();
}
}
http://stackoverflow.com/questions/8351886/amazons3-putobject-with-inputstream-length-example
CREATING A CONNECTION
This creates a connection so that you can interact with the server.
String accessKey = "insert your access key here!";
String secretKey = "insert your secret key here!";

AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
AmazonS3 conn = new AmazonS3Client(credentials);
conn.setEndpoint("objects.dreamhost.com");
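With the connection created, a quick sanity check is to list the buckets the credentials can see. This is only a sketch; Bucket is the same model class imported in the sample above.
for (Bucket bucket : conn.listBuckets()) {
    System.out.println(bucket.getName() + "\t" + bucket.getCreationDate());
}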
How to obtain the accessKey and secretKey:
In the console: Services → IAM (under the Security, Identity & Compliance group) → Users → Security credentials → Create access key.
To get your access key ID and secret access key:
Open the IAM console.
In the navigation pane, choose Users.
Choose your IAM user name (not the check box).
Choose the Security Credentials tab and then choose Create Access Key.
To see your access key, choose Show User Security Credentials. Your credentials will look something like this:
Access Key ID: AKIAIOSFODNN7EXAMPLE
Secret Access Key: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Choose Download Credentials, and store the keys in a secure location. Your secret key will no longer be available through the AWS Management Console; you will have the only copy.
Keep it confidential in order to protect your account, and never email it.
Do not share it outside your organization, even if an inquiry appears to come from AWS or Amazon.com.
No one who legitimately represents Amazon will ever ask you for your secret key.
http://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSGettingStartedGuide/AWSCredentials.html (security credentials)
https://console.aws.amazon.com/iam/home?#/users/zhao.zhibo?section=security_credentials
This is the only time you can view or download the secret access key; it cannot be recovered later. You can, however, create a new access key at any time.
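The downloaded keys go into the shared credentials file that ProfileCredentialsProvider reads (~/.aws/credentials). Its expected layout, shown here with the example keys from above, is roughly:
[default]
aws_access_key_id = AKIAIOSFODNN7EXAMPLE
aws_secret_access_key = wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY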
CHANGE AN OBJECT’S ACL
This makes the object hello.txt publicly readable and secret_plans.txt private.
conn.setObjectAcl(bucket.getName(), "hello.txt", CannedAccessControlList.PublicRead);
conn.setObjectAcl(bucket.getName(), "secret_plans.txt", CannedAccessControlList.Private);
// Build the pre-signed URL request
GeneratePresignedUrlRequest urlRequest = new GeneratePresignedUrlRequest(bucketName, key);
// Generate the public (pre-signed) URL
URL url = s3.generatePresignedUrl(urlRequest);
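To control how long the link remains valid, and to generate a download link explicitly, you can use the four-argument overload documented below. A sketch, reusing bucketName and key from above (HttpMethod comes from com.amazonaws.HttpMethod):
java.util.Date expiration = new java.util.Date(System.currentTimeMillis() + 1000 * 60 * 60); // valid for one hour
URL downloadUrl = s3.generatePresignedUrl(bucketName, key, expiration, HttpMethod.GET);
System.out.println("Pre-signed GET URL: " + downloadUrl);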
/**
* <p>
* Returns a pre-signed URL for accessing an Amazon S3 resource.
* </p>
* <p>
* Pre-signed URLs allow clients to form a URL for an Amazon S3 resource,
* and then sign it with the current AWS security credentials.
* The pre-signed URL
* can be shared to other users, allowing access to the resource without
* providing an account's AWS security credentials.
* </p>
* <p>
* Pre-signed URLs are useful in many situations where AWS security
* credentials aren't available from the client that needs to make the
* actual request to Amazon S3.
* </p>
* <p>
* For example, an application may need remote users to upload files to the
* application owner's Amazon S3 bucket, but doesn't need to ship the
* AWS security credentials with the application. A pre-signed URL
* to PUT an object into the owner's bucket can be generated from a remote
* location with the owner's AWS security credentials, then the pre-signed
* URL can be passed to the end user's application to use.
* </p>
* <p>
* If you are generating presigned url for <a
* href="http://aws.amazon.com/kms/">AWS KMS</a>-encrypted objects, you need to
* specify the correct region of the bucket on your client and configure AWS
* Signature Version 4 for added security. For more information on how to do
* this, see
* http://docs.aws.amazon.com/AmazonS3/latest/dev/UsingAWSSDK.html#
* specify-signature-version
* </p>
*
* @param bucketName
* The name of the bucket containing the desired object.
* @param key
* The key in the specified bucket under which the desired object
* is stored.
* @param expiration
* The time at which the returned pre-signed URL will expire.
* @param method
* The HTTP method verb to use for this URL
*
* @return A pre-signed URL which expires at the specified time, and can be
* used to allow anyone to download the specified object from S3,
* without exposing the owner's AWS secret access key.
*
* @throws SdkClientException
* If there were any problems pre-signing the request for the
* specified S3 object.
*
* @see AmazonS3#generatePresignedUrl(String, String, Date)
* @see AmazonS3#generatePresignedUrl(GeneratePresignedUrlRequest)
*/
public URL generatePresignedUrl(String bucketName, String key, Date expiration, HttpMethod method)
throws SdkClientException;
https://aws.amazon.com/articles/3002109349624271
Upload an Object Using a Pre-Signed URL (AWS SDK for Java)
The following tasks guide you through using the Java classes to upload an object using a pre-signed URL.
import java.io.IOException;
import java.io.OutputStreamWriter;
import java.net.HttpURLConnection;
import java.net.URL;

import com.amazonaws.AmazonClientException;
import com.amazonaws.AmazonServiceException;
import com.amazonaws.HttpMethod;
import com.amazonaws.auth.profile.ProfileCredentialsProvider;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3Client;
import com.amazonaws.services.s3.model.GeneratePresignedUrlRequest;

public class GeneratePresignedUrlAndUploadObject {
private static String bucketName = "*** bucket name ***";
private static String objectKey = "*** object key ***";

public static void main(String[] args) throws IOException {
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

try {
System.out.println("Generating pre-signed URL.");
java.util.Date expiration = new java.util.Date();
long milliSeconds = expiration.getTime();
milliSeconds += 1000 * 60 * 60; // Add 1 hour.
expiration.setTime(milliSeconds);

GeneratePresignedUrlRequest generatePresignedUrlRequest =
new GeneratePresignedUrlRequest(bucketName, objectKey);
generatePresignedUrlRequest.setMethod(HttpMethod.PUT);
generatePresignedUrlRequest.setExpiration(expiration);

URL url = s3client.generatePresignedUrl(generatePresignedUrlRequest);

UploadObject(url);

System.out.println("Pre-Signed URL = " + url.toString());
} catch (AmazonServiceException exception) {
System.out.println("Caught an AmazonServiceException, " +
"which means your request made it " +
"to Amazon S3, but was rejected with an error response " +
"for some reason.");
System.out.println("Error Message: " + exception.getMessage());
System.out.println("HTTP Code: " + exception.getStatusCode());
System.out.println("AWS Error Code:" + exception.getErrorCode());
System.out.println("Error Type: " + exception.getErrorType());
System.out.println("Request ID: " + exception.getRequestId());
} catch (AmazonClientException ace) {
System.out.println("Caught an AmazonClientException, " +
"which means the client encountered " +
"an internal error while trying to communicate" +
" with S3, " +
"such as not being able to access the network.");
System.out.println("Error Message: " + ace.getMessage());
}
}

public static void UploadObject(URL url) throws IOException
{
HttpURLConnection connection=(HttpURLConnection) url.openConnection();
connection.setDoOutput(true);
connection.setRequestMethod("PUT");
OutputStreamWriter out = new OutputStreamWriter(
connection.getOutputStream());
out.write("This text uploaded as object.");
out.close();
int responseCode = connection.getResponseCode();
System.out.println("Service returned response code " + responseCode); }
}
http://docs.aws.amazon.com/AmazonS3/latest/dev/PresignedUrlUploadObjectJavaSDK.html
Setting permissions on an object in Amazon AWS S3
Setting an ACL when creating a resource
String bucketName = "bucket-name";
String keyName = "object-key";
String uploadFileName = "file-name";

AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

AccessControlList acl = new AccessControlList();
acl.grantPermission(new CanonicalGrantee("d25639fbe9c19cd30a4c0f43fbf00e2d3f96400a9aa8dabfbbebe1906Example"), Permission.ReadAcp);
acl.grantPermission(GroupGrantee.AllUsers, Permission.Read);
acl.grantPermission(new EmailAddressGrantee("user@email.com"), Permission.WriteAcp);

File file = new File(uploadFileName);
s3client.putObject(new PutObjectRequest(bucketName, keyName, file).withAccessControlList(acl));
Create a bucket and, in the same request, specify the LogDeliveryWrite canned ACL to grant write permission to the Amazon S3 LogDelivery group.
String bucketName = "bucket-name";
AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider());

s3client.createBucket(new CreateBucketRequest(bucketName).withCannedAcl(CannedAccessControlList.LogDeliveryWrite));
Updating the ACL on an existing resource
String bucketName = "bucket-name";
String keyName = "object-key"; AmazonS3 s3client = new AmazonS3Client(new ProfileCredentialsProvider()); AccessControlList acl = new AccessControlList();
acl.grantPermission(new CanonicalGrantee("d25639fbe9c19cd30a4c0f43fbf00e2d3f96400a9aa8dabfbbebe1906Example"), Permission.ReadAcp);
acl.grantPermission(GroupGrantee.AuthenticatedUsers, Permission.Read);
acl.grantPermission(new EmailAddressGrantee("user@email.com"), Permission.WriteAcp);
Owner owner = new Owner();
owner.setId("852b113e7a2f25102679df27bb0ae12b3f85be6f290b936c4393484beExample");
owner.setDisplayName("display-name");
acl.setOwner(owner);

s3client.setObjectAcl(bucketName, keyName, acl);
http://docs.amazonaws.cn/AmazonS3/latest/dev/acl-using-java-sdk.html
s3.setObjectAcl(bucketName, key, CannedAccessControlList.PublicRead);
http://docs.ceph.com/docs/master/radosgw/s3/java/#creating-a-connection
http://abc08010051.iteye.com/blog/2082956
http://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/index.html