[toc]
## Environment

| IP | Hostname |
| --- | --- |
| 10.1.80.91 | s1 |
| 10.1.80.92 | s2 |
| 10.1.80.93 | s3 |
## Install cfssl

| cfssl version | Binary directory | Certificate directory |
| --- | --- | --- |
| 1.6.2 | /usr/local/bin | /tmp/certs |
```bash
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.6.2/cfssl_1.6.2_linux_amd64 -o /tmp/cfssl
```
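Since `cfssljson` is used in the gencert steps below and the table above places the binaries in /usr/local/bin, a minimal sketch of the remaining install steps (assuming the standard 1.6.2 release asset names) looks like this:

```bash
# cfssljson is needed later to split cfssl's JSON output into .pem files
curl -L https://github.com/cloudflare/cfssl/releases/download/v1.6.2/cfssljson_1.6.2_linux_amd64 -o /tmp/cfssljson

# make both tools executable and move them onto the PATH
chmod +x /tmp/cfssl /tmp/cfssljson
mv /tmp/cfssl /usr/local/bin/cfssl
mv /tmp/cfssljson /usr/local/bin/cfssljson
```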
### Print the default certificate configs

```bash
cfssl print-defaults csr
cfssl print-defaults config
```
## Generate the CA root certificate

### ca-config configuration

Note that the server profile in etcd-ca-config.json must include "client auth". When APISIX first builds its connection to the cluster, it performs a health check against the nodes by default, and at that point it does not present a client certificate and private key the way etcdctl does. The nodes therefore communicate using the server certificate and private key, so the server certificate has to act as both server and client certificate.

In addition, when APISIX talks to the etcd cluster, etcd will also validate the server certificate as a client certificate because of the grpc-gateway.

The profiles section (the purpose of the different CA configurations) defines what the CA can be used to sign. Three profiles are predefined, corresponding to the Server, Peer, and Client certificate/key pairs.

`signing` means the certificate can sign, `key encipherment` means key encryption, `server auth` means server authentication, and `client auth` means client authentication.
```bash
mkdir -p /tmp/certs
```
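A sketch of an `etcd-ca-config.json` consistent with the description above, with three profiles (server, peer, client) and with "client auth" included in the server profile as required for APISIX's health check; the expiry values are assumptions:

```bash
cat > /tmp/certs/etcd-ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "server": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      },
      "peer": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "server auth", "client auth"]
      },
      "client": {
        "expiry": "87600h",
        "usages": ["signing", "key encipherment", "client auth"]
      }
    }
  }
}
EOF
```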
### ca-csr configuration

In `hosts`, list the IPs of all etcd nodes and their corresponding domain names.
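A sketch of `etcd-ca-csr.json` following that note; the CN and key parameters are assumptions, and the hosts come from the environment table above:

```bash
cat > /tmp/certs/etcd-ca-csr.json <<EOF
{
  "CN": "etcd-ca",
  "hosts": [
    "10.1.80.91",
    "10.1.80.92",
    "10.1.80.93",
    "s1",
    "s2",
    "s3"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
```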
### Generate the CA certificate and its private key

```bash
cd /tmp/certs
```
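Given the output files listed next, the signing step is presumably the standard cfssl self-signed CA initialization:

```bash
# create a self-signed CA certificate and key from the CSR definition
cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca
```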
The following files are generated:
- etcd-ca.csr
- etcd-ca-key.pem
- etcd-ca.pem
etcd-ca-key.pem is the CA's private key; keep it safe.
etcd-ca.csr is the certificate signing request file and can be deleted.
## Generate the Server and Peer certificates

### Server and Peer configuration

The Server certificate is used for client-to-server communication and the Peer certificate for node-to-node communication; they share the same CSR configuration (see the sketch below).
```bash
mkdir -p /tmp/certs
```
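A sketch of the shared `etcd-csr.json`; the CN and key parameters are assumptions, and the hosts follow the earlier note to list every etcd node's IP and hostname:

```bash
cat > /tmp/certs/etcd-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "10.1.80.91",
    "10.1.80.92",
    "10.1.80.93",
    "s1",
    "s2",
    "s3"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
```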
### Generate the Server and Peer certificates

```bash
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=etcd-ca-config.json -profile=server etcd-csr.json | cfssljson -bare etcd-server
```
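The peer files listed below come from signing the same CSR with the peer profile, presumably:

```bash
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=etcd-ca-config.json -profile=peer etcd-csr.json | cfssljson -bare etcd-peer
```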
The following files are generated:
- etcd-server.csr
- etcd-server-key.pem
- etcd-server.pem
- etcd-peer.csr
- etcd-peer-key.pem
- etcd-peer.pem
## Generate the Client certificate

### Client configuration

The client certificate does not need a `hosts` field; just set the `CN` field to `client` (see the sketch below).
```bash
mkdir -p /tmp/certs
```
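A sketch of `etcd-client-csr.json` matching that description; the key parameters are assumptions:

```bash
cat > /tmp/certs/etcd-client-csr.json <<EOF
{
  "CN": "client",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  }
}
EOF
```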
### Generate the Client certificate

```bash
cfssl gencert -ca=etcd-ca.pem -ca-key=etcd-ca-key.pem -config=etcd-ca-config.json -profile=client etcd-client-csr.json | cfssljson -bare etcd-client
```
The following files are generated:
- etcd-client.csr
- etcd-client-key.pem
- etcd-client.pem
## Create the 3-node cluster

### Install etcd

Install etcd on all three nodes (s1, s2, s3).
```bash
ETCD_VER=v3.5.4
```
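Continuing with the `ETCD_VER` variable set above, one way the install might look, assuming the binaries go into /opt/etcd3 (the directory referenced by the later etcdctl commands) and using the etcd-io GitHub release URL pattern:

```bash
# download and unpack the etcd release
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

mkdir -p /opt/etcd3
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /opt/etcd3 --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz

# verify the binaries
/opt/etcd3/etcd --version
/opt/etcd3/etcdctl version
```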
### Create the certificate directory

Create the certificate directory on each node:

```bash
mkdir -p /opt/etcd3/ssl
```
### Copy the certificates to s1, s2, s3

For convenience, just copy all the certificate files:

```bash
scp -r /tmp/certs/etcd-*.pem root@10.1.80.91:/opt/etcd3/ssl
scp -r /tmp/certs/etcd-*.pem root@10.1.80.92:/opt/etcd3/ssl
scp -r /tmp/certs/etcd-*.pem root@10.1.80.93:/opt/etcd3/ssl
```
### Run etcd in the foreground (not recommended)

Template for running etcd (`./etcd -name etcd1 \ ...`): the full flag list is shown in the per-node commands below. In `--initial-cluster-token etcd-cluster-tkn`, "etcd-cluster-tkn" can be replaced with whatever value you need.
#### Run etcd on s1, s2, and s3
Node 1:

```bash
# make sure etcd process has write access to this directory
# remove this directory if the cluster is new; keep if restarting etcd
rm -rf /tmp/etcd-data
./etcd -name s1 \
  --data-dir /tmp/etcd-data \
  --auto-tls \
  --client-cert-auth \
  --cert-file=/opt/etcd3/ssl/etcd-server.pem \
  --key-file=/opt/etcd3/ssl/etcd-server-key.pem \
  --trusted-ca-file=/opt/etcd3/ssl/etcd-ca.pem \
  --peer-auto-tls \
  --peer-cert-file=/opt/etcd3/ssl/etcd-peer.pem \
  --peer-key-file=/opt/etcd3/ssl/etcd-peer-key.pem \
  --peer-client-cert-auth \
  --peer-trusted-ca-file=/opt/etcd3/ssl/etcd-ca.pem \
  --advertise-client-urls https://10.1.80.91:2379 \
  --listen-client-urls https://10.1.80.91:2379 \
  --listen-peer-urls https://10.1.80.91:2380 \
  --initial-advertise-peer-urls https://10.1.80.91:2380 \
  --initial-cluster-token etcd-cluster-tkn \
  --initial-cluster "s1=https://10.1.80.91:2380,s2=https://10.1.80.92:2380,s3=https://10.1.80.93:2380" \
  --initial-cluster-state new
```

Node 2:
```bash
# make sure etcd process has write access to this directory
# remove this directory if the cluster is new; keep if restarting etcd
rm -rf /tmp/etcd-data
./etcd -name s2 \
  --data-dir /tmp/etcd-data \
  --auto-tls \
  --client-cert-auth \
  --cert-file=/opt/etcd3/ssl/etcd-server.pem \
  --key-file=/opt/etcd3/ssl/etcd-server-key.pem \
  --trusted-ca-file=/opt/etcd3/ssl/etcd-ca.pem \
  --peer-auto-tls \
  --peer-cert-file=/opt/etcd3/ssl/etcd-peer.pem \
  --peer-key-file=/opt/etcd3/ssl/etcd-peer-key.pem \
  --peer-client-cert-auth \
  --peer-trusted-ca-file=/opt/etcd3/ssl/etcd-ca.pem \
  --advertise-client-urls https://10.1.80.92:2379 \
  --listen-client-urls https://10.1.80.92:2379 \
  --listen-peer-urls https://10.1.80.92:2380 \
  --initial-advertise-peer-urls https://10.1.80.92:2380 \
  --initial-cluster-token etcd-cluster-tkn \
  --initial-cluster "s1=https://10.1.80.91:2380,s2=https://10.1.80.92:2380,s3=https://10.1.80.93:2380" \
  --initial-cluster-state new
```

Node 3:
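Node 3 follows the same pattern as nodes 1 and 2, with the s3 member name and the 10.1.80.93 addresses:

```bash
# make sure etcd process has write access to this directory
# remove this directory if the cluster is new; keep if restarting etcd
rm -rf /tmp/etcd-data
./etcd -name s3 \
  --data-dir /tmp/etcd-data \
  --auto-tls \
  --client-cert-auth \
  --cert-file=/opt/etcd3/ssl/etcd-server.pem \
  --key-file=/opt/etcd3/ssl/etcd-server-key.pem \
  --trusted-ca-file=/opt/etcd3/ssl/etcd-ca.pem \
  --peer-auto-tls \
  --peer-cert-file=/opt/etcd3/ssl/etcd-peer.pem \
  --peer-key-file=/opt/etcd3/ssl/etcd-peer-key.pem \
  --peer-client-cert-auth \
  --peer-trusted-ca-file=/opt/etcd3/ssl/etcd-ca.pem \
  --advertise-client-urls https://10.1.80.93:2379 \
  --listen-client-urls https://10.1.80.93:2379 \
  --listen-peer-urls https://10.1.80.93:2380 \
  --initial-advertise-peer-urls https://10.1.80.93:2380 \
  --initial-cluster-token etcd-cluster-tkn \
  --initial-cluster "s1=https://10.1.80.91:2380,s2=https://10.1.80.92:2380,s3=https://10.1.80.93:2380" \
  --initial-cluster-state new
```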
#### Check etcd cluster status
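The check runs etcdctl's `endpoint health` against all three members, authenticating with the CA and server certificate (the same command is shown again in the systemd section below):

```bash
ETCDCTL_API=3 /opt/etcd3/etcdctl \
  --cacert /opt/etcd3/ssl/etcd-ca.pem \
  --cert /opt/etcd3/ssl/etcd-server.pem \
  --key /opt/etcd3/ssl/etcd-server-key.pem \
  --endpoints 10.1.80.91:2379,10.1.80.92:2379,10.1.80.93:2379 \
  endpoint health
```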
### Run etcd with systemd (strongly recommended)

If you previously ran etcd in the foreground, remember to stop those etcd processes first.
Copy the certificates to /opt/etcd3/ssl first (as above).

```bash
# make sure etcd process has write access to this directory
# remove this directory if the cluster is new; keep if restarting etcd
rm -rf /tmp/etcd/s1
```

Write the service file for etcd.

s1:
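A sketch of what the s1 unit file might contain, reusing the TLS flags and cluster settings from the foreground commands above with a /tmp/etcd/s1 data directory; the [Unit]/[Service] boilerplate and Type=notify are assumptions, and the redundant --auto-tls flags are dropped since explicit certificates are supplied:

```bash
# quote the heredoc delimiter so the backslash line continuations are written literally
cat > /tmp/s1.service <<'EOF'
[Unit]
Description=etcd member s1
Documentation=https://github.com/etcd-io/etcd
After=network.target

[Service]
Type=notify
Restart=always
RestartSec=5s
LimitNOFILE=40000
ExecStart=/opt/etcd3/etcd -name s1 \
  --data-dir /tmp/etcd/s1 \
  --client-cert-auth \
  --cert-file=/opt/etcd3/ssl/etcd-server.pem \
  --key-file=/opt/etcd3/ssl/etcd-server-key.pem \
  --trusted-ca-file=/opt/etcd3/ssl/etcd-ca.pem \
  --peer-client-cert-auth \
  --peer-cert-file=/opt/etcd3/ssl/etcd-peer.pem \
  --peer-key-file=/opt/etcd3/ssl/etcd-peer-key.pem \
  --peer-trusted-ca-file=/opt/etcd3/ssl/etcd-ca.pem \
  --advertise-client-urls https://10.1.80.91:2379 \
  --listen-client-urls https://10.1.80.91:2379 \
  --listen-peer-urls https://10.1.80.91:2380 \
  --initial-advertise-peer-urls https://10.1.80.91:2380 \
  --initial-cluster-token etcd-cluster-tkn \
  --initial-cluster "s1=https://10.1.80.91:2380,s2=https://10.1.80.92:2380,s3=https://10.1.80.93:2380" \
  --initial-cluster-state new

[Install]
WantedBy=multi-user.target
EOF
```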
To start the service:
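A sketch of installing and starting the unit, assuming it goes under /etc/systemd/system:

```bash
# install the unit file and start etcd on this node
mv /tmp/s1.service /etc/systemd/system/s1.service
systemctl daemon-reload
systemctl enable s1.service
systemctl start s1.service

# follow the logs if the member fails to start
journalctl -u s1.service -f
```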
s2 and s3 use the same unit file as s1 (`cat > /tmp/s2.service <<EOF ...` and `cat > /tmp/s3.service <<EOF ...`), substituting the member name, data directory, and the 10.1.80.92 / 10.1.80.93 client and peer URLs.
Check status:
Check with etcdctl:

```bash
ETCDCTL_API=3 /opt/etcd3/etcdctl \
  --cacert /opt/etcd3/ssl/etcd-ca.pem \
  --cert /opt/etcd3/ssl/etcd-server.pem \
  --key /opt/etcd3/ssl/etcd-server-key.pem \
  --endpoints 10.1.80.91:2379,10.1.80.92:2379,10.1.80.93:2379 \
  endpoint health
```
Check with curl:

```bash
# check health status with curl
curl --cacert /opt/etcd3/ssl/etcd-ca.pem --cert /opt/etcd3/ssl/etcd-server.pem --key /opt/etcd3/ssl/etcd-server-key.pem https://10.1.80.91:2379/health
```

A healthy member should answer with a small JSON body, roughly `{"health":"true"}`.
Note: remember to switch to the designated user first.
If the etcdctl output looks like the following, the cluster is healthy:

```
10.1.80.93:2379 is healthy: successfully committed proposal: took = 6.506639ms
10.1.80.91:2379 is healthy: successfully committed proposal: took = 6.427877ms
10.1.80.92:2379 is healthy: successfully committed proposal: took = 7.161322ms
```