<?xml-stylesheet href="/rss.xsl" type="text/xsl"?>
<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
<title>雪泥鸿爪</title>
<link>https://www.lix23.com/</link>
<description>Recent content on 雪泥鸿爪</description>
<generator>Hugo -- gohugo.io</generator>
<language>en-us</language>
<copyright>All rights reserved by lixin [email protected] </copyright>
<lastBuildDate>Tue, 14 Nov 2023 23:13:41 +0800</lastBuildDate>
<atom:link href="https://www.lix23.com/index.xml" rel="self" type="application/rss+xml" />
<item>
<title>Building a Self-Managed Kubernetes Cluster in the Cloud</title>
<link>https://www.lix23.com/posts/new-k8s-cluster/</link>
<pubDate>Tue, 14 Nov 2023 23:13:41 +0800</pubDate>
<guid>https://www.lix23.com/posts/new-k8s-cluster/</guid>
<description>雪泥鸿爪 https://www.lix23.com/posts/new-k8s-cluster/ -<h2 id="安装背景">Background</h2>
<p>Wanting more control over cluster configuration, many people choose to build their own Kubernetes cluster. This post records how I bought ECS instances on Alibaba Cloud and built a Kubernetes cluster from scratch. To keep costs down, the nodes are burstable 2c4G ECS instances running Anolis OS 8.8 RHCK (64-bit). The installation uses kubeadm, and the overall flow breaks down into two major steps:</p>
<ol>
<li>Initialize the nodes: install the kubectl, kubelet, and kubeadm components via the rpm package manager; install containerd and complete its configuration;</li>
<li>Initialize the cluster: prepare the kubeadm configuration file, generate certificates and related configuration, bring up a highly available control plane, and join the worker nodes.</li>
</ol>
<h2 id="初始化节点">Initializing the Nodes</h2>
<h3 id="安装kubectlkubeadmkubelet-组件">Installing the kubectl, kubeadm, and kubelet Components</h3>
<p>Because of network reachability constraints, configure local (mainland China) rpm mirrors first. Start with the Kubernetes repo; note that gpgcheck is disabled:</p>
<pre tabindex="0"><code class="language-shell">cat &lt;&lt; EOF | tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
</code></pre>
<pre tabindex="0"><code class="language-shell"># pin the versions; without an explicit version the latest available release is installed
yum install kubectl kubeadm kubelet
</code></pre>
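<p>To actually pin versions, append the version to the package name and enable kubelet; a sketch (1.28.2 is only an example, chosen to match the cluster version used later in this post):</p>
<pre tabindex="0"><code class="language-shell">yum install -y kubectl-1.28.2 kubeadm-1.28.2 kubelet-1.28.2
systemctl enable kubelet
</code></pre>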
<h3 id="安装-containerd">Installing containerd</h3>
<p>Install containerd by following <a href="https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#containerd">https://kubernetes.io/zh-cn/docs/setup/production-environment/container-runtimes/#containerd</a>.</p>
<p>Configure a local mirror repo for containerd:</p>
<pre tabindex="0"><code class="language-shell">wget -O /etc/yum.repos.d/docker-ce.repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
</code></pre><p>First verify that the containerd package exists in the repo, then install and start the containerd runtime:</p>
<pre tabindex="0"><code class="language-shell">yum install containerd.io
systemctl enable containerd
systemctl start containerd
</code></pre>
<h3 id="修改-containerd-中的-sandbox-image配置">Updating the Sandbox Image in the containerd Config</h3>
<p>In the containerd configuration file config.toml, set the sandbox image to registry.aliyuncs.com/google_containers/pause:3.9. This change is needed because the default image is hosted on gcr.io, which is not reachable from mainland China.</p>
<pre tabindex="0"><code class="language-shell"># generate the default containerd configuration file
containerd config default &gt; /etc/containerd/config.toml

# after saving the edited file, restart containerd so the configuration takes effect
systemctl restart containerd
</code></pre>
<p>Notes:</p>
<ul>
<li>To inspect the configuration containerd is actually running with: containerd config dump &gt; 2.toml;</li>
<li>If containerd does not behave as expected, check its logs with journalctl -xeu containerd;</li>
</ul>
<h3 id="添加-crictl配置">Adding the crictl Configuration</h3>
<p>The relevant configuration file is /etc/crictl.yaml:</p>
<pre tabindex="0"><code class="language-shell">cat &lt;&lt; EOF | tee /etc/crictl.yaml
# the two endpoint settings below point crictl at containerd
runtime-endpoint: unix:///run/containerd/containerd.sock
image-endpoint: unix:///run/containerd/containerd.sock
timeout: 3
debug: true
EOF
</code></pre>
<p>After this, crictl images should produce normal output.</p>
<h3 id="安装-并配置-cni-插件">Installing and Configuring the CNI Plugins</h3>
<p>On the CNI plugins release page <a href="https://github.com/containernetworking/plugins/releases">https://github.com/containernetworking/plugins/releases</a>, locate the v1.3.0 (or v1.0.0) release and download the tarball with wget.</p>
<pre tabindex="0"><code class="language-shell"># create the directory for the CNI binaries
mkdir -p /opt/cni/bin
# unpack the tarball into that directory
tar Cxzvf /opt/cni/bin cni-plugins-linux-amd64-v1.3.0.tgz
</code></pre>
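<p>For reference, the download command itself; the URL follows the naming pattern of the release page above:</p>
<pre tabindex="0"><code class="language-shell">wget https://github.com/containernetworking/plugins/releases/download/v1.3.0/cni-plugins-linux-amd64-v1.3.0.tgz
</code></pre>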
<p>The local CNI configuration; mind the cniVersion and the IP subnet range:</p>
<pre tabindex="0"><code class="language-shell">cat &lt;&lt; EOF | tee /etc/cni/net.d/10-containerd-net.conflist
{
  "cniVersion": "1.0.0",
  "name": "containerd-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "promiscMode": true,
      "ipam": {
        "type": "host-local",
        "ranges": [
          [{
            "subnet": "10.14.0.0/24"
          }]
        ],
        "routes": [
          { "dst": "0.0.0.0/0" },
          { "dst": "::/0" }
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {"portMappings": true},
      "externalSetMarkChain": "KUBE-MARK-MASQ"
    }
  ]
}
EOF
</code></pre>
<p>Note that this only takes care of container communication on the local node; pod traffic across nodes is not yet handled and needs a solution such as flannel or Calico.</p>
<p>Test: run a local container successfully with crictl; see the crictl documentation for details and the sketch below.</p>
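<p>A minimal smoke test might look like the following; the image and file names are illustrative, not prescribed by crictl:</p>
<pre tabindex="0"><code class="language-shell"># minimal pod sandbox and container specs
cat &lt;&lt; EOF &gt; pod-config.json
{
  "metadata": { "name": "test-pod", "namespace": "default", "uid": "test-uid-1" },
  "linux": {}
}
EOF
cat &lt;&lt; EOF &gt; container-config.json
{
  "metadata": { "name": "busybox" },
  "image": { "image": "docker.io/library/busybox:latest" },
  "command": ["sleep", "3600"],
  "linux": {}
}
EOF

crictl pull docker.io/library/busybox:latest
POD_ID=$(crictl runp pod-config.json)                                    # create the pod sandbox
CTR_ID=$(crictl create "$POD_ID" container-config.json pod-config.json) # create the container in it
crictl start "$CTR_ID"
crictl ps                                                               # busybox should show as Running
</code></pre>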
<h3 id="内核参数配置修改">Kernel Parameter Changes</h3>
<p>Kernel parameter setup (without the following steps, kubeadm init will not pass its preflight checks):</p>
<pre tabindex="0"><code class="language-shell"># set the hostname
hostnamectl set-hostname k8s-1

# install tc
yum install iproute-tc

# if sysctl reports "cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory",
# load the kernel modules first
modprobe bridge
modprobe br_netfilter

echo "net.bridge.bridge-nf-call-iptables = 1" &gt;&gt; /etc/sysctl.conf
echo "net.bridge.bridge-nf-call-ip6tables = 1" &gt;&gt; /etc/sysctl.conf
echo "net.ipv4.ip_forward = 1" &gt;&gt; /etc/sysctl.conf
echo "vm.swappiness = 0" &gt;&gt; /etc/sysctl.conf

# apply the settings
sysctl -p /etc/sysctl.conf
</code></pre>
<h3 id="集群初始化--拉起控制面">Cluster Initialization: Bringing Up the Control Plane</h3>
<p>The control plane uses three servers, with etcd deployed on the same machines as the API server. SSH into the first planned master and initialize it with kubeadm. Note that the configuration must point at Alibaba Cloud's mirror registry; --apiserver-advertise-address is this master's private IP; and if the control-plane certificates should be uploaded to the cluster, add the --upload-certs flag when running kubeadm init.
A typical kubeadm init command looks like this:</p>
<pre tabindex="0"><code class="language-shell">kubeadm init \
  --apiserver-advertise-address=192.168.0.121 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.28.2 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
</code></pre>
<p>To get access to more configuration options, however, we initialize from a config file instead:</p>
<pre tabindex="0"><code class="language-shell">kubeadm init --config kubeadm-config.yaml
</code></pre>
<p>The kubeadm configuration file is shown below. certSANs lists the hostnames and IPs of all three masters, plus the domain name and IP of the load balancer in front of the API servers. Note that this layout co-locates the etcd instances with the API server instances; for a production deployment, splitting etcd out onto separate servers is recommended, and the kubeadm configuration then differs somewhat; see the links in the references:</p>
<pre tabindex="0"><code class="language-yaml">apiServer:
  certSANs:
  - api.k8s.local
  - api.lixin.com
  - master3105
  - master12
  - master079
  - 10.0.3.105
  - 10.0.1.2
  - 10.0.0.79
  - 10.0.4.212
  extraArgs:
    authorization-mode: Node,RBAC
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta3
certificatesDir: /etc/kubernetes/pki
clusterName: kube-lixin
controllerManager: {}
dns: {}
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers
kind: ClusterConfiguration
kubernetesVersion: v1.28.0
controlPlaneEndpoint: "10.0.4.212:6443"
networking:
  dnsDomain: cluster.local
  podSubnet: 10.244.0.0/16
  serviceSubnet: 10.96.0.0/12
scheduler: {}
</code></pre>
<p>Watch the log output while kubeadm init runs; errors are reported there in detail. If the logs are not detailed enough, re-run with the --v=10 flag.</p>
<p>After finishing node initialization (the first part of this post) on the other two control-plane machines, if --upload-certs was not passed to kubeadm init, the Kubernetes certificates have to be copied between the control-plane machines manually:</p>
<pre tabindex="0"><code class="language-shell">scp /etc/kubernetes/pki/etcd/ca.key [email protected]:/etc/kubernetes/pki/etcd
scp /etc/kubernetes/pki/etcd/ca.crt [email protected]:/etc/kubernetes/pki/etcd
scp /etc/kubernetes/pki/front-proxy-ca.key [email protected]:/etc/kubernetes/pki
scp /etc/kubernetes/pki/front-proxy-ca.crt [email protected]:/etc/kubernetes/pki
scp /etc/kubernetes/pki/sa.pub [email protected]:/etc/kubernetes/pki
scp /etc/kubernetes/pki/sa.key [email protected]:/etc/kubernetes/pki
scp /etc/kubernetes/pki/ca.key [email protected]:/etc/kubernetes/pki
scp /etc/kubernetes/pki/ca.crt [email protected]:/etc/kubernetes/pki
</code></pre>
<p>Join the node to the cluster as a master with the command below; the --control-plane flag is what marks it as a control-plane node:</p>
<pre tabindex="0"><code class="language-shell">kubeadm join 10.0.3.105:6443 --token svy3br.1hcjfw953q23ypq3 --discovery-token-ca-cert-hash sha256:032c441fac7307da1867fab25ab77f480318c3ca68fb67977cd8011c3a4aab88 --control-plane
</code></pre>
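<p>If --upload-certs was passed at init time, the join can instead fetch the certificates from the cluster, with no scp needed; a sketch, where the certificate key is the value printed by kubeadm init (placeholders in angle brackets):</p>
<pre tabindex="0"><code class="language-shell">kubeadm join 10.0.3.105:6443 --token &lt;token&gt; \
  --discovery-token-ca-cert-hash sha256:&lt;hash&gt; \
  --control-plane --certificate-key &lt;key-printed-by-kubeadm-init&gt;
</code></pre>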
<p>Two points about the highly available control plane:</p>
<ol>
<li>If the certificates were not uploaded to the cluster, they must be copied between the master nodes manually beforehand;</li>
<li>An SLB is needed to forward traffic to the multiple API server instances; controlPlaneEndpoint reflects this SLB configuration.</li>
</ol>
<h3 id="加入worker节点">Joining Worker Nodes</h3>
<p>At this point the control plane is essentially up. Join worker nodes with the command below; workers need neither certificates copied over in advance nor the --control-plane flag. If the join fails with a token-expiry error, log in to the original master, create a new token with kubeadm token create, and pass it via --token.</p>
<pre tabindex="0"><code class="language-shell">kubeadm join 10.0.3.105:6443 --token svy3br.1hcjfw953q23ypq3 --discovery-token-ca-cert-hash sha256:032c441fac7307da1867fab25ab77f480318c3ca68fb67977cd8011c3a4aab88 --v=10
</code></pre>
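<p>When the token has expired, kubeadm can also print a complete, ready-to-run join command with a fresh token:</p>
<pre tabindex="0"><code class="language-shell"># run on the first master; prints "kubeadm join ... --token ... --discovery-token-ca-cert-hash ..."
kubeadm token create --print-join-command
</code></pre>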
<h3 id="master节点的添加及一些注意事项">Adding Master Nodes, and Some Caveats</h3>
<p>To add a new control-plane machine later, add its IP and DNS entries to the kubeadm configuration file and regenerate the API server certificates with the kubeadm init phase commands below:</p>
<pre tabindex="0"><code class="language-shell"># when a new control-plane machine joins, if its hostname and IP are not covered by the
# existing certificate, regenerate the API server certificate
kubeadm init phase certs apiserver --config kubeadm.yaml

# regenerate cluster-info
kubeadm init phase bootstrap-token
</code></pre>
<p>This completes the initial build of a self-managed Kubernetes cluster. For cross-node pod communication, installing Calico is recommended.</p>
<h2 id="参考资料">References</h2>
<h3 id="安装过程中参考了如下文档中的描述">The installation drew on the following documents:</h3>
<ul>
<li>To use external etcd, follow the steps in <a href="https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/#%E5%A4%96%E9%83%A8-etcd-%E8%8A%82%E7%82%B9">https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/#%E5%A4%96%E9%83%A8-etcd-%E8%8A%82%E7%82%B9</a></li>
<li>Installation walkthrough: <a href="https://cloud.tencent.com/developer/article/1706627">https://cloud.tencent.com/developer/article/1706627</a></li>
<li>Control-plane high availability: <a href="https://zhangguanzhang.github.io/2019/03/11/k8s-ha/#/vip-%E5%8D%95%E7%82%B9%E7%9A%84%E5%9D%91%E4%B9%8B-%E2%80%93advertise-address">https://zhangguanzhang.github.io/2019/03/11/k8s-ha/#/vip-%E5%8D%95%E7%82%B9%E7%9A%84%E5%9D%91%E4%B9%8B-%E2%80%93advertise-address</a></li>
<li>Installation and containerd: <a href="https://cloud.tencent.com/developer/article/2145554">https://cloud.tencent.com/developer/article/2145554</a></li>
<li>containerd setup guide, including CNI: <a href="https://github.com/containerd/containerd/blob/main/docs/getting-started.md">https://github.com/containerd/containerd/blob/main/docs/getting-started.md</a></li>
<li>Installing from inside mainland China: <a href="https://zhuanlan.zhihu.com/p/46341911">https://zhuanlan.zhihu.com/p/46341911</a></li>
<li>CNI plugin configuration, including possible version-mismatch problems: <a href="https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/#updating-your-cni-plugins-and-cni-config-files">https://kubernetes.io/zh-cn/docs/tasks/administer-cluster/migrating-from-dockershim/troubleshooting-cni-plugin-related-errors/#updating-your-cni-plugins-and-cni-config-files</a></li>
<li>Distributing certificates manually before joining a master node: <a href="https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/#manual-certs">https://kubernetes.io/zh-cn/docs/setup/production-environment/tools/kubeadm/high-availability/#manual-certs</a></li>
<li>Checking and renewing the apiserver certificate, and regenerating the cluster-info ConfigMap in kube-system: <a href="https://cloud.tencent.com/developer/article/1824860">https://cloud.tencent.com/developer/article/1824860</a></li>
</ul>
<h3 id="kubeadm-join命令中的-discovery-token-ca-cert-hash-对应的值可以通过如下命令获取">The discovery-token-ca-cert-hash value for kubeadm join can be obtained with:</h3>
<pre tabindex="0"><code class="language-shell">openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2&gt;/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'
</code></pre>
<h3 id="证书相关一些命令备忘">Certificate-related command notes:</h3>
<pre tabindex="0"><code class="language-shell"># inspect a certificate:
openssl x509 -in apiserver.crt -text -noout

# an alternative way to get a new master's IP into the certificate:
openssl req -new -newkey rsa:4096 -days 3650 -nodes -x509 \
  -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=kube-apiserver" \
  -keyout apiserver.key \
  -out apiserver.crt \
  -extensions SAN -config &lt;(echo "[req]"; echo distinguished_name=req; echo "[SAN]"; echo subjectAltName=IP:Load_Balancer_IP1,IP:Load_Balancer_IP2)

openssl req -new -newkey rsa:4096 -days 3650 -nodes -x509 \
  -subj "/C=US/ST=Denial/L=Springfield/O=Dis/CN=kube-apiserver" \
  -keyout apiserver.key \
  -out apiserver.crt \
  -extensions SAN -config &lt;(echo "[req]"; echo distinguished_name=req; echo "[SAN]"; echo subjectAltName=DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local,DNS:master3105,IP:10.96.0.1,IP:10.0.3.105,IP:10.0.4.212)
</code></pre>- https://www.lix23.com/posts/new-k8s-cluster/ - All rights reserved by lixin [email protected] </description>
</item>
<item>
<title>DNAT, REDIRECT, and TPROXY</title>
<link>https://www.lix23.com/posts/dnat-redirect-tproxy/</link>
<pubDate>Sun, 17 Sep 2023 23:37:09 +0800</pubDate>
<guid>https://www.lix23.com/posts/dnat-redirect-tproxy/</guid>
<description>雪泥鸿爪 https://www.lix23.com/posts/dnat-redirect-tproxy/ -<p>Kubernetes Services are implemented by default with iptables-based traffic routing. Common iptables targets in this context are DNAT, REDIRECT, and TPROXY; below we look at how these three targets differ.</p>
<h2 id="1dnat-模式">1. DNAT Mode</h2>
<p>DNAT stands for Destination Network Address Translation. When forwarding with DNAT, the destination address in the packet header is rewritten; it is a low-overhead form of local routing, commonly known as port forwarding.</p>
<p>DNAT works by rewriting the destination IP address or port of a packet before the routing decision takes effect. It relies on the kernel's connection tracking mechanism: only with connection tracking can reply packets be matched to the original connection, so that the addresses in returning packets can be translated back.</p>
<p>Example:</p>
<pre tabindex="0"><code># translate 12.0.0.254:8080 to 192.168.72.10:80
iptables -t nat -A PREROUTING -i ens33 -d 12.0.0.254 -p tcp --dport 8080 -j DNAT --to 192.168.72.10:80
</code></pre><p>This rule rewrites the destination of TCP packets that arrive on ens33 addressed to 12.0.0.254:8080, changing it to 192.168.72.10:80.</p>
<p>The caller never learns that the traffic was ultimately routed to an internal server; it only knows the gateway address it talked to, and its request gets a correct response.</p>
<blockquote>
<p>TIP: how reply packets find their way back to the original caller under DNAT</p>
<p>In the Linux network stack, DNAT (Destination Network Address Translation) is implemented on top of the connection tracking (conntrack) system, a kernel mechanism that tracks and records the state of network connections (such as TCP flows or UDP exchanges) so that NAT can process packets correctly.</p>
<p>When a packet first reaches the gateway and matches a DNAT rule, conntrack creates a new connection-tracking entry. The entry records the original source and destination addresses (before DNAT) as well as the translated source and destination addresses (after DNAT), and it lives until the connection ends or times out.</p>
<p>When a response packet from the server reaches the gateway, conntrack looks up the matching entry using the packet's source and destination addresses (the post-DNAT addresses). If an entry is found, the kernel knows which connection the packet belongs to and what the pre-DNAT addresses were; it can then rewrite the packet's addresses back to the originals and send the packet on to the original caller.</p>
<p>Conntrack is therefore what makes DNAT work: without it, the gateway would have no way of knowing which client a response packet should be sent to.</p>
</blockquote>
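<p>The conntrack entries can be inspected directly with the conntrack CLI from conntrack-tools; a quick sketch, filtering on the DNAT example above (8080 being the original, pre-NAT destination port):</p>
<pre tabindex="0"><code># list tracked TCP connections whose original destination port is 8080
conntrack -L -p tcp --orig-port-dst 8080
</code></pre>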
<h2 id="2-redirect-模式">2. REDIRECT Mode</h2>
<p>REDIRECT can be seen as a special case of DNAT: the packet is diverted to another port on the local machine, and the addresses in the packet header are rewritten accordingly.</p>
<p>Example:</p>
<pre tabindex="0"><code>iptables --table nat -A PREROUTING --protocol tcp --dport 80 --jump REDIRECT --to-ports 8080
</code></pre><p>Compared with DNAT, REDIRECT takes no destination IP, only a destination port: it diverts traffic to another local socket. In this scenario the server side is unaware of the caller, and the caller, while it knows the callee's IP, does not know that its packets were forwarded to a different socket.</p>
<h2 id="3tproxy-透明代理模式">3. TPROXY (Transparent Proxy) Mode</h2>
<p>In DNAT and REDIRECT modes, all routing happens inside the kernel; TPROXY works differently. In brief: an iptables rule whose target is TPROXY hands packets to a local socket without modifying their headers, and sets a mark on them; ip rule and ip route then process the marked packets further and deliver them to a transparent proxy. In this scenario the proxy receives packets that still carry the original source and destination addresses.
The transparent proxy is invisible to the calling client: neither the client nor its OS needs any extra configuration, and network parameters can be set as needed without affecting the proxy.
A TPROXY example:</p>
<pre tabindex="0"><code>iptables -t mangle -A PREROUTING -p tcp --dport 80 -j TPROXY --on-port 8080 --tproxy-mark 0x1/0x1
</code></pre><p>This rule steers all TCP packets whose destination port is 80 to local port 8080 and sets their mark to 0x1 (a mark that ip rule and ip route can then act on).
The difference from REDIRECT: REDIRECT rewrites the packet header, whereas TPROXY does not modify the packet at all; it merely passes the packet along (this is not kernel packet forwarding, just a copy between sockets). TPROXY needs no connection tracking either; the packet's original destination port becomes the local port of the connection socket.
The established TCP connections can be observed on the gateway and on the final backend machine with ss --tcp --numeric --process --listening.</p>
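<p>The iptables rule above only marks and diverts packets; the companion policy routing that actually delivers marked traffic to the local proxy socket is typically set up along these lines (a sketch; table number 100 is arbitrary):</p>
<pre tabindex="0"><code># route packets marked 0x1 through a dedicated table that treats all destinations as local
ip rule add fwmark 0x1/0x1 lookup 100
ip route add local 0.0.0.0/0 dev lo table 100
</code></pre>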
<p>A transparent proxy sets the IP_TRANSPARENT option on its socket: with IP_TRANSPARENT set to true, a socket can receive and send packets carrying any local or non-local IP address and port. This is what makes transparent proxying possible, since it lets the proxy accept and handle packets originally destined for other servers, and it requires neither kernel-side connection tracking nor IP forwarding.</p>
<blockquote>
<p>iptables supports four tables:</p>
<ul>
<li>
<p>filter: the default and most commonly used table; when no -t argument is given, iptables operates on filter. It filters packets and has three built-in chains: INPUT (packets addressed to this host), FORWARD (packets forwarded through this host), and OUTPUT (packets originated by this host).</p>
</li>
<li>
<p>nat: used for Network Address Translation. It has three built-in chains: PREROUTING (processes packets before the routing decision), OUTPUT (packets originated by this host), and POSTROUTING (processes packets after the routing decision). The nat table is typically used to rewrite source or destination addresses.</p>
</li>
<li>
<p>mangle: used for special packet modifications, such as changing the type of service (TOS) or time to live (TTL). It has five built-in chains: PREROUTING, OUTPUT, FORWARD, INPUT, and POSTROUTING.</p>
</li>
<li>
<p>raw: used to exempt packets from connection tracking. It has two built-in chains: PREROUTING and OUTPUT.</p>
</li>
</ul>
<p>For any table, rules can be modified with -A (append), -D (delete), -I (insert), and -R (replace), and listed with -L. For example, iptables -t nat -L lists all rules in the nat table.</p>
</blockquote>
<p>Reference: <a href="https://gsoc-blog.ecklm.com/iptables-redirect-vs.-dnat-vs.-tproxy/">https://gsoc-blog.ecklm.com/iptables-redirect-vs.-dnat-vs.-tproxy/</a></p>
- https://www.lix23.com/posts/dnat-redirect-tproxy/ - All rights reserved by lixin [email protected] </description>
</item>
<item>
<title>How to Generate a Static Website with Hugo</title>
<link>https://www.lix23.com/posts/how-to-gen-website/</link>
<pubDate>Fri, 25 Aug 2023 13:08:35 +0800</pubDate>
<guid>https://www.lix23.com/posts/how-to-gen-website/</guid>
<description>雪泥鸿爪 https://www.lix23.com/posts/how-to-gen-website/ -<p>Three years ago I built this static blog with Hugo; three years later I spent some time picking it back up. Here are the steps for using Hugo, recorded so I do not forget them again; a consolidated sketch follows the list.</p>
<ol>
<li>Install Hugo: first, install Hugo on your machine. The method varies by operating system; on my Mac I installed it with brew install hugo.</li>
<li>Create the site: run hugo new site ${your_website_name} to create the site's root directory.</li>
<li>Pick a theme to taste, for example the diary theme <a href="https://themes.gohugo.io/themes/hugo-theme-diary/">https://themes.gohugo.io/themes/hugo-theme-diary/</a>, and personalize the site by editing hugo.toml following the theme's sample configuration.</li>
<li>From the site root, create a post with hugo new posts/${post_name}.md, edit it with your favorite editor, then bring the site up locally with hugo server and check the result in a browser.</li>
<li>Generate the static site: once there is enough content, run hugo in the site root; this renders the static site into the public directory. Upload the generated content to the corresponding GitHub repo and the site is reachable.</li>
</ol>
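<p>End to end, the workflow looks roughly like this (the site name, theme repo URL, and post name are all illustrative):</p>
<pre tabindex="0"><code>brew install hugo
hugo new site myblog &amp;&amp; cd myblog
git init
git submodule add https://github.com/AmazingRise/hugo-theme-diary.git themes/diary
echo "theme = 'diary'" &gt;&gt; hugo.toml
hugo new posts/my-first-post.md
hugo server     # preview at http://localhost:1313
hugo            # render the static site into ./public
</code></pre>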
- https://www.lix23.com/posts/how-to-gen-website/ - All rights reserved by lixin [email protected] </description>
</item>
<item>
<title>My Very First Post</title>
<link>https://www.lix23.com/posts/my-very-first-post/</link>
<pubDate>Sat, 18 Jul 2020 08:06:48 +0800</pubDate>
<guid>https://www.lix23.com/posts/my-very-first-post/</guid>
<description>雪泥鸿爪 https://www.lix23.com/posts/my-very-first-post/ -<p>It has been about three weeks since I moved out of 蝶园 and into 溪望; so far so good, and I hope everything keeps moving in a good direction. I think I need more time to shape myself, keep fighting!</p>
- https://www.lix23.com/posts/my-very-first-post/ - All rights reserved by lixin [email protected] </description>
</item>
</channel>
</rss>