
First steps with S3 on ceph-mimic-13.2.5


Prerequisites

A Ceph cluster in HEALTH_OK state, with free storage space remaining.

Here is the cluster I set up:

$ sudo ceph -s
  cluster:
    id:     a20b153c-c907-41bb-a5b2-753a40e2085c
    health: HEALTH_OK

  services:
    mon: 4 daemons, quorum node1,node2,node3,node4
    mgr: node2(active), standbys: node3, node1, node4
    osd: 4 osds: 4 up, 4 in
    rgw: 1 daemon active

  data:
    pools:   6 pools, 48 pgs
    objects: 198 objects, 3.2 KiB
    usage:   4.1 GiB used, 46 GiB / 50 GiB avail
    pgs:     48 active+clean

Creating the OBJECT GATEWAY

Note that the object gateway node does not need to be a mon or an osd node.

First, install the required packages on the node. The official docs recommend ceph-deploy install --rgw <gateway-node1> [<gateway-node2> ...], but since pinning a specific version and downloading the epel packages both tend to fail, I recommend installing ceph and ceph-radosgw directly from a local yum repository instead.

$ sudo yum install -y ceph-13.2.5 ceph-radosgw-13.2.5

Second, make the node an admin node; here I simply used node1 from the cluster:

[deploy]$ ceph-deploy admin node1

Third, create the gateway instance:

[deploy]$ ceph-deploy rgw create node1

The service listens on port 7480 by default. Listing the open ports shows that it came up successfully:

[node1]$ netstat -nlpt

Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 192.168.31.203:6789 0.0.0.0:* LISTEN -
tcp 0 0 192.168.31.203:6800 0.0.0.0:* LISTEN -
tcp 0 0 192.168.31.203:6801 0.0.0.0:* LISTEN -
tcp 0 0 192.168.31.203:6802 0.0.0.0:* LISTEN -
tcp 0 0 192.168.31.203:6803 0.0.0.0:* LISTEN -
tcp 0 0 192.168.31.203:6804 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:22 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN -
tcp 0 0 127.0.0.1:25 0.0.0.0:* LISTEN -
tcp6 0 0 :::22 :::* LISTEN -
tcp6 0 0 ::1:25 :::* LISTEN -
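Besides running netstat on the node itself, you can probe the gateway port from any machine. Here is a minimal sketch in Java (the host node1 and port 7480 mirror the setup above; adjust them to your environment):

```java
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class PortCheck {
    // Returns true if a TCP connection to host:port succeeds within timeoutMs.
    public static boolean isOpen(String host, int port, int timeoutMs) {
        try (Socket s = new Socket()) {
            s.connect(new InetSocketAddress(host, port), timeoutMs);
            return true;
        } catch (IOException e) {
            return false;
        }
    }

    public static void main(String[] args) {
        // "node1" and 7480 are assumptions matching this walkthrough.
        String host = args.length > 0 ? args[0] : "node1";
        int port = args.length > 1 ? Integer.parseInt(args[1]) : 7480;
        System.out.println(host + ":" + port +
                (isOpen(host, port, 2000) ? " reachable" : " unreachable"));
    }
}
```

A successful TCP connect only proves something is listening; the S3 Browser test below confirms that it actually speaks S3.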

Listing the pools, you should see the rgw-related pools have been created:

$ sudo ceph osd pool ls

default.rgw.meta
.rgw.root
default.rgw.control
default.rgw.log
default.rgw.buckets.index
default.rgw.buckets.data

Creating a RADOSGW USER

S3 is a web service interface, so naturally you need credentials to interact with it; before using it, create a user.

[node1]$ sudo radosgw-admin user create --uid="testuser" --display-name="First User"

{
    "user_id": "testuser",
    "display_name": "First User",
    "email": "",
    "suspended": 0,
    "max_buckets": 1000,
    "subusers": [],
    "keys": [{
        "user": "testuser",
        "access_key": "I0PJDPCIYZ665MW88W9R",
        "secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
    }],
    "swift_keys": [],
    "caps": [],
    "op_mask": "read, write, delete",
    "default_placement": "",
    "placement_tags": [],
    "bucket_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "user_quota": {
        "enabled": false,
        "max_size_kb": -1,
        "max_objects": -1
    },
    "temp_url_keys": []
}

Of this long output, the only things you need for now are the access_key and the secret_key.

To look up the access_key and secret_key again later, use the following commands:

# List existing users
[node1]$ sudo radosgw-admin user list

[
"testuser"
]

# Show a user's info
[node1]$ sudo radosgw-admin user info --uid=testuser

...
"keys": [{
"user": "testuser",
"access_key": "I0PJDPCIYZ665MW88W9R",
"secret_key": "dxaXZ8U90SXydYzyS5ivamEP20hkLSUViiaR+ZDA"
}],
...

Using S3 Browser (optional)

Browse the storage with S3 Browser to verify that the service is running and reachable.

Download S3 Browser

Run S3 Browser; it will ask you to enter your S3 account:

(screenshot)

Then fill in the information from above, like so:

(screenshot)

Once connected, you can see the current buckets, and use the tool to create, delete, upload, and download:

(screenshot)

At this point the Ceph-cluster side of the preparation is done; next comes the code.


Creating the project

I'm using IDEA with Java here.

First, create a Maven project (it makes dependency management much simpler):

(screenshot)

(Note: IDEA's Maven configuration is omitted here.)

Edit pom.xml and add the dependency:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>test</groupId>
    <artifactId>test</artifactId>
    <version>1.0-SNAPSHOT</version>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/com.amazonaws/aws-java-sdk-s3 -->
        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk-s3</artifactId>
            <version>1.11.597</version>
        </dependency>
    </dependencies>

</project>

The most important next step is to work from the official code samples.

The class I ended up writing is as follows:

package s3;

import com.amazonaws.auth.AWSStaticCredentialsProvider;
import com.amazonaws.auth.BasicAWSCredentials;
import com.amazonaws.client.builder.AwsClientBuilder;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.*;
import com.amazonaws.util.StringUtils;

import java.io.*;
import java.util.Iterator;
import java.util.List;

/**
 * S3 helper class implementing some basic operations.
 * @author long
 * @version v0.1
 */
public class S3Template {

    public static final String ORIGINAL_FOLDER = "video/original";       // directory of source files
    public static final String DOWNLOAD_FOLDER = "video/download";       // directory for downloaded files
    public static final String TRANSCODING_FOLDER = "video/transcoding"; // directory for transcoded files

    private AmazonS3 s3Client = null;

    private MyLogger log = null;

    public S3Template(String endPoint, String accessKey, String secretKey) {
        // The second EndpointConfiguration argument is the signing region,
        // which can stay empty when talking to a Ceph RGW endpoint.
        this.s3Client = AmazonS3ClientBuilder.standard()
                .withCredentials(new AWSStaticCredentialsProvider(new BasicAWSCredentials(accessKey, secretKey)))
                .withEndpointConfiguration(new AwsClientBuilder.EndpointConfiguration(endPoint, ""))
                .build();
        log = new MyLogger();
    }

    public void listBuckets() {
        List<Bucket> buckets = s3Client.listBuckets();
        for (Bucket bucket : buckets) {
            log.info(bucket.getName() + "\t" +
                    StringUtils.fromDate(bucket.getCreationDate()));
        }
    }

    public void createBucket(String bucketName) {
        if (!s3Client.doesBucketExistV2(bucketName)) {
            s3Client.createBucket(new CreateBucketRequest(bucketName));
        }
        // Verify that the bucket was created by retrieving it and checking its location.
        String bucketLocation = s3Client.getBucketLocation(new GetBucketLocationRequest(bucketName));
        log.info("Bucket location: " + bucketLocation);
    }

    public void deleteBucket(String bucketName) {
        if (!s3Client.doesBucketExistV2(bucketName)) {
            log.info("Bucket " + bucketName + " does not exist.");
            return;
        }
        // Delete all objects first: a bucket must be empty before it can be deleted.
        ObjectListing objectListing = s3Client.listObjects(bucketName);
        while (true) {
            Iterator<S3ObjectSummary> objIter = objectListing.getObjectSummaries().iterator();
            while (objIter.hasNext()) {
                s3Client.deleteObject(bucketName, objIter.next().getKey());
            }
            if (objectListing.isTruncated()) {
                objectListing = s3Client.listNextBatchOfObjects(objectListing);
            } else {
                break;
            }
        }

        // Also delete all object versions (needed for versioned buckets).
        VersionListing versionList = s3Client.listVersions(new ListVersionsRequest().withBucketName(bucketName));
        while (true) {
            Iterator<S3VersionSummary> versionIter = versionList.getVersionSummaries().iterator();
            while (versionIter.hasNext()) {
                S3VersionSummary vs = versionIter.next();
                s3Client.deleteVersion(bucketName, vs.getKey(), vs.getVersionId());
            }
            if (versionList.isTruncated()) {
                versionList = s3Client.listNextBatchOfVersions(versionList);
            } else {
                break;
            }
        }

        s3Client.deleteBucket(bucketName);
    }

    public void downFile(String bucketName, String key) throws IOException {
        log.info("Downloading object " + key);
        S3Object fullObject = s3Client.getObject(new GetObjectRequest(bucketName, key));
        storeFile(fullObject.getObjectContent(), key);
    }

    public void uploadFile(String bucketName, String fileObjKeyName, String fileName) {
        if (s3Client.doesObjectExist(bucketName, fileObjKeyName)) {
            log.info("The file " + fileName + " already exists.");
            return;
        }

        // Upload a file as a new object with content type and title specified.
        log.info("Upload object " + fileName);
        PutObjectRequest request = new PutObjectRequest(bucketName, fileObjKeyName, new File(ORIGINAL_FOLDER + "/" + fileName));
        ObjectMetadata metadata = new ObjectMetadata();
        metadata.setContentType("video/mp4"); // a valid MIME type for the video files being uploaded
        // The SDK prepends "x-amz-meta-" itself, so pass the bare key name here.
        metadata.addUserMetadata("title", "someTitle");
        request.setMetadata(metadata);
        s3Client.putObject(request);
    }

    public void storeFile(InputStream input, String fileName) throws IOException {
        log.info("Store file " + fileName);
        // Copy the object's content stream to a local file in 1 KiB chunks.
        BufferedInputStream reader = new BufferedInputStream(input);
        BufferedOutputStream writer = new BufferedOutputStream(new FileOutputStream(DOWNLOAD_FOLDER + "/" + fileName));
        byte[] buff = new byte[1024];
        int len = 0;
        while ((len = reader.read(buff)) != -1) {
            writer.write(buff, 0, len);
        }
        reader.close();
        writer.close();
    }

}
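The S3Template class depends on a MyLogger class whose source isn't shown in the original post; a minimal stand-in (my own sketch, not the original implementation) could be:

```java
import java.time.LocalDateTime;
import java.time.format.DateTimeFormatter;

// Hypothetical replacement for the unshown MyLogger class:
// a thin wrapper that timestamps messages to stdout.
public class MyLogger {
    private static final DateTimeFormatter FMT =
            DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss");

    public void info(String msg) {
        System.out.println(format(msg));
    }

    // Separated out so the formatting can be checked independently of stdout.
    String format(String msg) {
        return LocalDateTime.now().format(FMT) + " [INFO] " + msg;
    }
}
```

Any logging facade (java.util.logging, SLF4J, Log4j) would work equally well here.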

Successfully uploading a file to Ceph and downloading it back:

(screenshot)

That concludes this walkthrough of the basic operations.


References

INSTALL CEPH OBJECT GATEWAY

aws-doc-sdk-examples