Our company is preparing to adopt Ceph storage. After some research, we decided to implement it through Amazon's S3 interface. The implementation class is as follows:

  /**
   * Title: S3Manager
   * Description: S3 interface implementation for Ceph storage. Reference docs:
   * https://docs.aws.amazon.com/zh_cn/AmazonS3/latest/dev/RetrievingObjectUsingJava.html
   * http://docs.ceph.org.cn/radosgw/s3/
   * author: xu jun
   * date: 2018/10/22
   */
  @Slf4j
  @Service
  public class S3Manager extends StorageManagerBase implements StorageManager {
      private final UKID ukid;
      private final S3ClientConfig s3ClientConfig;
      private final RedisManage redisManage;
      private AmazonS3 amazonClient;

      @Autowired
      public S3Manager(UKID ukid, S3ClientConfig s3ClientConfig, RedisManage redisManage) {
          this.ukid = ukid;
          this.s3ClientConfig = s3ClientConfig;
          this.redisManage = redisManage;
      }

      private AmazonS3 getAmazonClient() {
          if (amazonClient == null) {
              String accessKey = s3ClientConfig.getAccessKey();
              String secretKey = s3ClientConfig.getSecretKey();
              String endpoint = s3ClientConfig.getEndPoint();

              AWSCredentials credentials = new BasicAWSCredentials(accessKey, secretKey);
              ClientConfiguration clientConfig = new ClientConfiguration();
              clientConfig.setProtocol(Protocol.HTTP);

              AmazonS3 conn = AmazonS3ClientBuilder.standard()
                      .withClientConfiguration(clientConfig)
                      .withCredentials(new AWSStaticCredentialsProvider(credentials))
                      .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endpoint, ""))
                      .withPathStyleAccessEnabled(true)
                      .build();

              // Make sure the bucket exists before handing out the client
              checkBucket(conn);
              amazonClient = conn;
          }
          return amazonClient;
      }

      @Override
      public String uploadFile(byte[] fileData, String extension) {
          log.info("Storage s3 api, upload file start");

          // Generate a random ID for the uploaded file
          long fileId = ukid.getGeneratorID();
          String fileName = Long.toString(fileId);
          // Bucket name
          String bucketName = s3ClientConfig.getBucketName();
          AmazonS3 conn = getAmazonClient();

          // Provide the content length up front so the SDK need not buffer the stream
          ObjectMetadata metadata = new ObjectMetadata();
          metadata.setContentLength(fileData.length);
          PutObjectResult result = conn.putObject(bucketName, fileName, new ByteArrayInputStream(fileData), metadata);
          log.info("Storage s3 api, put object result: {}", result);

          log.info("Storage s3 api, upload file end, file name: {}", fileName);
          return fileName;
      }

      @Override
      public String uploadAppenderFile(byte[] fileData, String extension) {
          log.info("Storage s3 api, upload appender file start");

          // Generate a random ID for the uploaded file
          long ukId = ukid.getGeneratorID();
          String fileName = Long.toString(ukId);
          // Bucket name
          String bucketName = s3ClientConfig.getBucketName();
          AmazonS3 conn = getAmazonClient();
          List<PartETag> partETags = new ArrayList<>();
          // Initiate the multipart upload
          InitiateMultipartUploadRequest initRequest = new InitiateMultipartUploadRequest(bucketName, fileName);
          InitiateMultipartUploadResult initResponse = conn.initiateMultipartUpload(initRequest);
          String uploadId = initResponse.getUploadId();

          ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(fileData);
          int contentLength = fileData.length;
          // Upload the first part (S3 part numbers start at 1)
          UploadPartRequest uploadPartRequest = new UploadPartRequest()
                  .withBucketName(bucketName)
                  .withKey(fileName)
                  .withUploadId(uploadId)
                  .withPartNumber(1)
                  .withPartSize(contentLength)
                  .withInputStream(byteArrayInputStream);
          UploadPartResult uploadPartResult = conn.uploadPart(uploadPartRequest);

          try {
              byteArrayInputStream.close();
          } catch (IOException e) {
              throw FileCenterExceptionConstants.INTERNAL_IO_EXCEPTION;
          }
          partETags.add(uploadPartResult.getPartETag());
          Integer partNumber = uploadPartResult.getPartNumber();

          // Cache the upload state in Redis so later chunks can resume this upload
          S3CacheMode cacheMode = new S3CacheMode();
          cacheMode.setPartETags(partETags);
          cacheMode.setPartNumber(partNumber);
          cacheMode.setUploadId(uploadId);
          redisManage.set(fileName, cacheMode);

          log.info("Storage s3 api, upload appender file end, fileName: {}", fileName);
          return fileName;
      }

      @Override
      public void uploadChunkFile(ChunkFileSaveParams chunkFileSaveParams) {
          log.info("Storage s3 api, upload chunk file start");

          // Restore the upload state that uploadAppenderFile cached in Redis
          String fileName = chunkFileSaveParams.getFileAddress();
          Result result = redisManage.get(fileName);
          JSONObject jsonObject = (JSONObject) result.getData();
          if (jsonObject == null) {
              throw FileCenterExceptionConstants.CACHE_DATA_NOT_EXIST;
          }
          S3CacheMode cacheMode = jsonObject.toJavaObject(S3CacheMode.class);
          Integer partNumber = cacheMode.partNumber;
          String uploadId = cacheMode.getUploadId();
          List<PartETag> partETags = cacheMode.partETags;

          // Bucket name
          String bucketName = s3ClientConfig.getBucketName();
          AmazonS3 conn = getAmazonClient();
          ByteArrayInputStream byteArrayInputStream = new ByteArrayInputStream(chunkFileSaveParams.getBytes());
          int contentLength = chunkFileSaveParams.getBytes().length;

          UploadPartRequest uploadPartRequest = new UploadPartRequest()
                  .withBucketName(bucketName)
                  .withKey(fileName)
                  .withUploadId(uploadId)
                  .withPartNumber(partNumber + 1)
                  .withPartSize(contentLength)
                  .withInputStream(byteArrayInputStream);

          UploadPartResult uploadPartResult = conn.uploadPart(uploadPartRequest);
          partETags.add(uploadPartResult.getPartETag());
          partNumber = uploadPartResult.getPartNumber();

          try {
              byteArrayInputStream.close();
          } catch (IOException e) {
              throw FileCenterExceptionConstants.INTERNAL_IO_EXCEPTION;
          }

          // Persist the updated upload state back to Redis
          S3CacheMode cacheModeUpdate = new S3CacheMode();
          cacheModeUpdate.setPartETags(partETags);
          cacheModeUpdate.setPartNumber(partNumber);
          cacheModeUpdate.setUploadId(uploadId);
          redisManage.set(fileName, cacheModeUpdate);

          if (chunkFileSaveParams.getChunk().equals(chunkFileSaveParams.getChunks() - 1)) {
              // Last chunk: complete the multipart upload and materialize the object
              CompleteMultipartUploadRequest compRequest = new CompleteMultipartUploadRequest(bucketName, fileName,
                      uploadId, partETags);
              conn.completeMultipartUpload(compRequest);
          }

          log.info("Storage s3 api, upload chunk file end");
      }

      @Override
      public byte[] downloadFile(String fileName) {
          log.info("Storage s3 api, download file start");
          // Bucket name
          String bucketName = s3ClientConfig.getBucketName();
          AmazonS3 conn = getAmazonClient();
          S3Object object;
          if (conn.doesObjectExist(bucketName, fileName)) {
              object = conn.getObject(bucketName, fileName);
          } else {
              throw FileCenterExceptionConstants.OBJECT_NOT_EXIST;
          }
          log.debug("Storage s3 api, get object result: {}", object);

          byte[] fileByte;
          InputStream inputStream = object.getObjectContent();
          try {
              fileByte = IOUtils.toByteArray(inputStream);
          } catch (IOException e) {
              throw FileCenterExceptionConstants.INTERNAL_IO_EXCEPTION;
          } finally {
              // Close in finally only, so the stream is not closed twice
              try {
                  inputStream.close();
              } catch (IOException e) {
                  log.error(e.getMessage());
              }
          }
          log.info("Storage s3 api, download file end");
          return fileByte;
      }

      @Override
      public byte[] downloadFile(String fileName, long fileOffset, long fileSize) {
          log.info("Storage s3 api, download file by block start");
          // Bucket name
          String bucketName = s3ClientConfig.getBucketName();
          AmazonS3 conn = getAmazonClient();
          S3Object object;
          if (conn.doesObjectExist(bucketName, fileName)) {
              // Ranged download; the range is inclusive at both ends, hence the -1
              GetObjectRequest getObjectRequest = new GetObjectRequest(bucketName, fileName)
                      .withRange(fileOffset, fileOffset + fileSize - 1);
              object = conn.getObject(getObjectRequest);
          } else {
              throw FileCenterExceptionConstants.OBJECT_NOT_EXIST;
          }
          log.info("Storage s3 api, get object result: {}", object);

          // Read the data
          byte[] buf;
          InputStream in = object.getObjectContent();
          try {
              buf = inputToByte(in, (int) fileSize);
          } catch (IOException e) {
              throw FileCenterExceptionConstants.INTERNAL_IO_EXCEPTION;
          } finally {
              try {
                  in.close();
              } catch (IOException e) {
                  log.error(e.getMessage());
              }
          }
          log.info("Storage s3 api, download file by block end");
          return buf;
      }

      @Override
      public String fileSecret(String filePath) {
          return null;
      }

      @Override
      public String fileDecrypt(String filePath) {
          return null;
      }

      @Override
      public String getDomain() {
          return null;
      }

      /**
       * Check whether the bucket has been created, and create it if not.
       *
       * @param conn client connection
       */
      private void checkBucket(AmazonS3 conn) {
          // Bucket name
          String bucketName = s3ClientConfig.getBucketName();
          if (conn.doesBucketExist(bucketName)) {
              log.debug("Storage s3 api, bucket found: {}", bucketName);
          } else {
              log.warn("Storage s3 api, bucket does not exist, creating it: {}", bucketName);
              conn.createBucket(bucketName);
          }
      }

      /**
       * Convert an InputStream to byte[].
       *
       * @param inStream input stream
       * @param fileSize file size
       * @return the bytes read
       * @throws IOException on read failure
       */
      private static byte[] inputToByte(InputStream inStream, int fileSize) throws IOException {
          ByteArrayOutputStream swapStream = new ByteArrayOutputStream();
          byte[] buff = new byte[fileSize];
          int rc;
          while ((rc = inStream.read(buff, 0, fileSize)) > 0) {
              swapStream.write(buff, 0, rc);
          }
          return swapStream.toByteArray();
      }

      /**
       * Debug helper: prints the stream contents to the console.
       *
       * @param input input stream
       * @throws IOException on read failure
      private static void displayTextInputStream(InputStream input) throws IOException {
          // Read the text input stream one line at a time and display each line.
          BufferedReader reader = new BufferedReader(new InputStreamReader(input));
          String line;
          while ((line = reader.readLine()) != null) {
              log.info(line);
          }
      }
      */
  }

The business-layer interface built on top of this has to support chunked upload (with resumable transfer), chunked download, and so on; the class above is the low-level layer and contains no business logic. A minimal usage sketch follows.
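For illustration only, here is a hypothetical driver showing how a caller might push a file through the class above chunk by chunk. The setter names on ChunkFileSaveParams are assumptions inferred from the getters the class uses; chunks are assumed to arrive in order, and the extension argument is a placeholder.

  // Hypothetical business-layer driver; ChunkFileSaveParams setters are assumed.
  public String uploadInChunks(S3Manager s3Manager, byte[][] chunks) {
      // The first chunk opens the multipart upload (part 1) and caches its state in Redis.
      String fileName = s3Manager.uploadAppenderFile(chunks[0], "bin");
      // Each later chunk resumes from the cached state; the last one completes the upload.
      for (int i = 1; i < chunks.length; i++) {
          ChunkFileSaveParams params = new ChunkFileSaveParams();
          params.setFileAddress(fileName);   // key under which the upload state was cached
          params.setBytes(chunks[i]);
          params.setChunk(i);                // zero-based chunk index
          params.setChunks(chunks.length);   // total number of chunks
          s3Manager.uploadChunkFile(params);
      }
      return fileName;
  }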

Maven dependency:

  <!-- S3 interface for Ceph storage -->
  <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk</artifactId>
      <version>1.11.</version>
  </dependency>
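A side note, not part of the original setup: the umbrella aws-java-sdk artifact pulls in clients for every AWS service. If only S3 is needed, the S3-only module keeps the dependency tree smaller:

  <!-- Lighter alternative: S3 module only (pin to the same 1.11.x version as above) -->
  <dependency>
      <groupId>com.amazonaws</groupId>
      <artifactId>aws-java-sdk-s3</artifactId>
      <version><!-- same 1.11.x version as above --></version>
  </dependency>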

Development notes:

  1. The S3 interface documentation (Java) on the Ceph site is sparse and outdated; by now it is barely usable as a reference. Look up the API and code samples on the Amazon site instead (a Chinese translation is provided, which is appreciated).

  2. The S3 interface itself provides no append operation, so implementing chunked upload is fairly cumbersome (unlike FastDFS and OSS, which make this easy).

  3. The minimum part size for a multipart upload defaults to 5 MB; it can be changed in the server-side configuration (see the sketch after this list).

  4. If a domain name is used as the endpoint, the bucket name is by default addressed as a subdomain (which requires DNS resolution, so it is not recommended). To address the bucket as a path component instead, specify it in the client configuration, as withPathStyleAccessEnabled(true) does in the class above.
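For point 3, a sketch of the server-side override, assuming the Ceph RGW option is rgw_multipart_min_part_size (the option name and section are assumptions; verify them against your Ceph release before using):

  # Hypothetical ceph.conf fragment on the RGW host (option name assumed; default 5242880 = 5 MiB)
  [client.rgw.gateway]
  rgw_multipart_min_part_size = 1048576    # allow parts as small as 1 MiB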
