diff --git a/blog/release-v1.10/index.mdx b/blog/release-v1.10/index.mdx
index 54a63451a5..83dbe11687 100644
--- a/blog/release-v1.10/index.mdx
+++ b/blog/release-v1.10/index.mdx
@@ -4,9 +4,9 @@ authors:
categories:
- General
- Announcements
-date: 2023-08-25
+date: 2022-03-07
draft: false
-lastmod: 2023-08-25
+lastmod: 2022-03-07
summary: KubeEdge v1.10 is live!
tags:
- KubeEdge
diff --git a/blog/release-v1.11/index.mdx b/blog/release-v1.11/index.mdx
index 33bee64caa..f72f66ab8b 100644
--- a/blog/release-v1.11/index.mdx
+++ b/blog/release-v1.11/index.mdx
@@ -4,9 +4,9 @@ authors:
categories:
- General
- Announcements
-date: 2023-10-25
+date: 2022-06-21
draft: false
-lastmod: 2023-10-25
+lastmod: 2022-06-21
summary: KubeEdge v1.11 is live!
tags:
- KubeEdge
diff --git a/blog/release-v1.12/index.mdx b/blog/release-v1.12/index.mdx
index ec5a983122..d6b636568a 100644
--- a/blog/release-v1.12/index.mdx
+++ b/blog/release-v1.12/index.mdx
@@ -5,9 +5,9 @@ categories:
- General
- Announcements
- Releases
-date: 2023-05-15
+date: 2022-09-29
draft: false
-lastmod: 2023-05-15
+lastmod: 2022-09-29
summary: KubeEdge v1.12 is live!
tags:
- KubeEdge
diff --git a/blog/release-v1.13/index.mdx b/blog/release-v1.13/index.mdx
index 0f79532618..b66d56de42 100644
--- a/blog/release-v1.13/index.mdx
+++ b/blog/release-v1.13/index.mdx
@@ -4,9 +4,9 @@ authors:
categories:
- General
- Announcements
-date: 2023-01-23
+date: 2023-01-18
draft: false
-lastmod: 2023-01-23
+lastmod: 2023-01-18
summary: KubeEdge v1.13 is live!
tags:
- KubeEdge
diff --git a/blog/release-v1.14/index.mdx b/blog/release-v1.14/index.mdx
index 02d94c30b2..6ba657ffc6 100644
--- a/blog/release-v1.14/index.mdx
+++ b/blog/release-v1.14/index.mdx
@@ -4,9 +4,9 @@ authors:
categories:
- General
- Announcements
-date: 2023-05-15
+date: 2023-07-01
draft: false
-lastmod: 2023-05-15
+lastmod: 2023-07-01
summary: KubeEdge v1.14 is live!
tags:
- KubeEdge
diff --git a/docs/setup/install-with-keadm.md b/docs/setup/install-with-keadm.md
index 8b585bf53a..280187d928 100644
--- a/docs/setup/install-with-keadm.md
+++ b/docs/setup/install-with-keadm.md
@@ -2,51 +2,54 @@
title: Installing KubeEdge with Keadm
sidebar_position: 3
---
-Keadm is used to install the cloud and edge components of KubeEdge. It is not responsible for installing K8s and runtime.
-Please refer [kubernetes-compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility) to get **Kubernetes compatibility** and determine what version of Kubernetes would be installed.
+Keadm is used to install the cloud and edge components of KubeEdge. It does not handle the installation of Kubernetes and its [runtime environment](https://kubeedge.io/docs/setup/prerequisites/runtime).
-## Limitation
+Please refer to the [Kubernetes compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility) documentation to check compatibility and determine the Kubernetes version to install.
-- Need super user rights (or root rights) to run.
+## Prerequisite
+- It requires super user rights (or root rights) to run.
-## Install keadm
+## Install Keadm
-There're three ways to download a `keadm` binary
+There are three ways to download the `keadm` binary:
-- Download from [github release](https://github.com/kubeedge/kubeedge/releases).
+1. Download from [GitHub release](https://github.com/kubeedge/kubeedge/releases).
- Now KubeEdge github officially holds three arch releases: amd64, arm, arm64. Please download the right arch package according to your platform, with your expected version.
+ KubeEdge GitHub officially holds three architecture releases: amd64, arm, and arm64. Please download the correct package according to your platform and desired version.
+
```shell
- wget https://github.com/kubeedge/kubeedge/releases/download/v1.12.1/keadm-v1.12.1-linux-amd64.tar.gz
- tar -zxvf keadm-v1.12.1-linux-amd64.tar.gz
- cp keadm-v1.12.1-linux-amd64/keadm/keadm /usr/local/bin/keadm
+ wget https://github.com/kubeedge/kubeedge/releases/download/v1.17.0/keadm-v1.17.0-linux-amd64.tar.gz
+ tar -zxvf keadm-v1.17.0-linux-amd64.tar.gz
+ cp keadm-v1.17.0-linux-amd64/keadm/keadm /usr/local/bin/keadm
```
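When scripting this download, the release architecture suffix can be derived from the machine type reported by `uname -m`. The following is a minimal sketch; the `x86_64`/`aarch64`/`armv7l` mapping is an assumption to adapt for your platform:

```shell
# Map the output of `uname -m` to a KubeEdge release architecture suffix.
# This mapping is an assumption; extend it if your platform reports
# a different machine type.
arch_suffix() {
  case "$1" in
    x86_64)  echo amd64 ;;
    aarch64) echo arm64 ;;
    armv7l)  echo arm ;;
    *) echo "unsupported architecture: $1" >&2; return 1 ;;
  esac
}

VERSION=v1.17.0
ARCH=$(arch_suffix "$(uname -m)")
echo "keadm-${VERSION}-linux-${ARCH}.tar.gz"
```

With `ARCH` resolved, the `wget`/`tar`/`cp` steps above can use `keadm-${VERSION}-linux-${ARCH}` instead of a hard-coded name.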
-- Download from dockerhub KubeEdge official release image.
+
+2. Download from the official KubeEdge release image on Docker Hub.
```shell
- docker run --rm kubeedge/installation-package:v1.12.1 cat /usr/local/bin/keadm > /usr/local/bin/keadm && chmod +x /usr/local/bin/keadm
+ docker run --rm kubeedge/installation-package:v1.17.0 cat /usr/local/bin/keadm > /usr/local/bin/keadm && chmod +x /usr/local/bin/keadm
```
-- Build from source
+3. Build from source.
- ref: [build from source](./install-with-binary#build-from-source)
-
+   Refer to [build from source](./install-with-binary#build-from-source) for instructions.
## Setup Cloud Side (KubeEdge Master Node)
-By default ports `10000` and `10002` in your cloudcore needs to be accessible for your edge nodes.
+By default, ports `10000` and `10002` on your CloudCore need to be accessible to your edge nodes.
+
+**IMPORTANT NOTES:**
-**IMPORTANT NOTE:**
+1. At least one of `kubeconfig` or `master` must be configured correctly to verify the version and other information of the Kubernetes cluster.
-1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other info of the k8s cluster.
-2. Please make sure edge node can connect cloud node using local IP of cloud node, or you need to specify public IP of cloud node with `--advertise-address` flag.
-3. `--advertise-address` is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP.
+2. Ensure the edge node can connect to the cloud node using the local IP of the cloud node, or specify the public IP of the cloud node with the `--advertise-address` flag.
+
+3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate). The default value is the local IP.
### keadm init
-`keadm init` provides a solution for integrating Cloudcore helm chart. Cloudcore will be deployed to cloud nodes in container mode.
+`keadm init` provides a solution for integrating the CloudCore Helm chart. CloudCore will be deployed to cloud nodes in container mode.
Example:
@@ -55,6 +58,7 @@ keadm init --advertise-address="THE-EXPOSED-IP" --profile version=v1.12.1 --kube
```
Output:
+
```shell
Kubernetes version verification passed, KubeEdge installation will start...
CLOUDCORE started
@@ -66,7 +70,8 @@ STATUS: deployed
REVISION: 1
```
-You can run `kubectl get all -n kubeedge` to ensure that cloudcore start successfully just like below.
+You can run `kubectl get all -n kubeedge` to ensure that CloudCore started successfully, as shown below.
+
```shell
# kubectl get all -n kubeedge
NAME READY STATUS RESTARTS AGE
@@ -82,11 +87,13 @@ NAME DESIRED CURRENT READY AGE
replicaset.apps/cloudcore-56b8454784 1 1 1 46s
```
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
+
+1. For setting flags with `--set key=value` on the CloudCore Helm chart, refer to the [KubeEdge CloudCore Helm Charts README.md](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/README.md).
-1. Set flags `--set key=value` for cloudcore helm chart could refer to [KubeEdge Cloudcore Helm Charts README.md](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/README.md).
2. You can start with one of Keadm’s built-in configuration profiles and then further customize the configuration for your specific needs. Currently, the built-in configuration profile keyword is `version`. Refer to [version.yaml](https://github.com/kubeedge/kubeedge/blob/master/manifests/profiles/version.yaml) as `values.yaml`, you can make your custom values file here, and add flags like `--profile version=v1.9.0 --set key=value` to use this profile. `--external-helm-root` flag provides a feature function to install the external helm charts like edgemesh.
-3. `keadm init` deploy cloudcore in container mode, if you want to deploy cloudcore as binary, please ref [`keadm deprecated init`](#keadm-deprecated-init) below.
+
+3. By default, `keadm init` deploys CloudCore in container mode. If you want to deploy CloudCore as a binary, please refer to [`keadm deprecated init`](#keadm-deprecated-init).
Example:
@@ -94,7 +101,7 @@ Example:
keadm init --set server.advertiseAddress="THE-EXPOSED-IP" --set server.nodeName=allinone --kube-config=/root/.kube/config --force --external-helm-root=/root/go/src/github.com/edgemesh/build/helm --profile=edgemesh
```
-If you are familiar with the helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
+If you are familiar with the Helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
**SPECIAL SCENARIO:**
@@ -109,24 +116,27 @@ To handle kube-proxy, you can refer to the [two methods](#anchor-name) mentioned
### keadm manifest generate
-You can also get the manifests with `keadm manifest generate`.
+You can generate the manifests using `keadm manifest generate`.
Example:
```shell
keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root/.kube/config > kubeedge-cloudcore.yaml
```
-> Add --skip-crds flag to skip outputing the CRDs
+
+> Add the `--skip-crds` flag to skip outputting the CRDs.
### keadm deprecated init
-`keadm deprecated init` will install cloudcore in binary process, generate the certs and install the CRDs. It also provides a flag by which a specific version can be set.
+`keadm deprecated init` installs CloudCore as a binary process, generates certificates, and installs the CRDs. It also provides a flag to set a specific version.
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
-1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other info of the k8s cluster.
-2. Please make sure edge node can connect cloud node using local IP of cloud node, or you need to specify public IP of cloud node with `--advertise-address` flag.
-3. `--advertise-address` is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP.
+1. At least one of `kubeconfig` or `master` must be configured correctly to verify the version and other information of the Kubernetes cluster.
+
+2. Ensure the edge node can connect to the cloud node using the local IP of the cloud node, or specify the public IP of the cloud node with the `--advertise-address` flag.
+
+3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate). The default value is the local IP.
Example:
```shell
@@ -141,7 +151,8 @@ keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root
CloudCore started
```
- You can run `ps -elf | grep cloudcore` command to ensure that cloudcore is running successfully.
+ You can run the `ps -elf | grep cloudcore` command to ensure that CloudCore is running successfully.
+
```shell
# ps -elf | grep cloudcore
0 S root 2736434 1 1 80 0 - 336281 futex_ 11:02 pts/2 00:00:00 /usr/local/bin/cloudcore
@@ -152,7 +163,7 @@ keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root
### Get Token From Cloud Side
-Run `keadm gettoken` in **cloud side** will return the token, which will be used when joining edge nodes.
+Run `keadm gettoken` on the **cloud side** to retrieve the token, which will be used when joining edge nodes.
```shell
# keadm gettoken
@@ -162,7 +173,8 @@ Run `keadm gettoken` in **cloud side** will return the token, which will be used
### Join Edge Node
#### keadm join
-`keadm join` will install edgecore. It also provides a flag by which a specific version can be set. It will pull image [kubeedge/installation-package](https://hub.docker.com/r/kubeedge/installation-package) from dockerhub and copy binary `edgecore` from container to hostpath, and then start `edgecore` as a system service.
+
+`keadm join` installs EdgeCore. It also provides a flag to set a specific version. It pulls the [kubeedge/installation-package](https://hub.docker.com/r/kubeedge/installation-package) image from Docker Hub, copies the `edgecore` binary from the container to the host path, and then starts `edgecore` as a system service.
Example:
@@ -170,10 +182,13 @@ Example:
keadm join --cloudcore-ipport="THE-EXPOSED-IP":10000 --token=27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE --kubeedge-version=v1.12.1
```
-**IMPORTANT NOTE:**
-1. `--cloudcore-ipport` flag is a mandatory flag.
-2. If you want to apply certificate for edge node automatically, `--token` is needed.
-3. The kubeEdge version used in cloud and edge side should be same.
+**IMPORTANT NOTES:**
+
+1. The `--cloudcore-ipport` flag is mandatory.
+
+2. If you want to apply for a certificate for the edge node automatically, the `--token` flag is needed.
+
+3. The KubeEdge version used on the cloud and edge sides should be the same.
Output:
@@ -182,7 +197,8 @@ Output:
KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
```
-you can run `systemctl status edgecore` command to ensure edgecore is running successfully
+You can run the `systemctl status edgecore` command to ensure EdgeCore is running successfully:
+
```shell
# systemctl status edgecore
● edgecore.service
@@ -195,14 +211,17 @@ you can run `systemctl status edgecore` command to ensure edgecore is running su
```
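Because the KubeEdge versions on the cloud and edge sides should match (note 3 above), a quick pre-join check can be scripted. This sketch only compares two version strings that you supply; it makes no assumption about the output format of any keadm command:

```shell
# Return success if two KubeEdge version strings are equal,
# tolerating an optional leading "v" on either side.
versions_match() {
  [ "${1#v}" = "${2#v}" ]
}

# The versions below are placeholders; substitute the actual versions
# installed on your cloud and edge nodes.
if versions_match "v1.12.1" "1.12.1"; then
  echo "versions match, safe to run keadm join"
else
  echo "version mismatch, align cloud and edge first" >&2
fi
```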
#### keadm deprecated join
-You can also use `keadm deprecated join` to start edgecore from release pacakge. It will download release packages from [KubeEdge release website](https://github.com/kubeedge/kubeedge/releases), and then start `edgecore` in binary progress.
+
+You can also use `keadm deprecated join` to start EdgeCore from a release package. It downloads the release package from the [KubeEdge release website](https://github.com/kubeedge/kubeedge/releases), and then starts `edgecore` as a binary process.
Example:
+
```shell
keadm deprecated join --cloudcore-ipport="THE-EXPOSED-IP":10000 --token=27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE --kubeedge-version=1.12.0
```
Output:
+
```shell
MQTT is installed in this host
...
@@ -210,59 +229,63 @@ KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
```
### Deploy demo on edge nodes
-ref: [Deploy demo on edge nodes](./install-with-binary#deploy-demo-on-edge-nodes)
+
+Refer to the [Deploy demo on edge nodes](./install-with-binary#deploy-demo-on-edge-nodes) documentation.
### Enable `kubectl logs` Feature
-Before deploying metrics-server , `kubectl logs` feature must be activated:
+Before deploying the metrics-server, the `kubectl logs` feature must be activated:
-> Note that if cloudcore is deployed using helm:
-> - The stream certs are generated automatically and cloudStream feature is enabled by default. So, step 1-3 could
- be skipped unless customization is needed.
-> - Also, step 4 could be finished by iptablesmanager component by default, manually operations are not needed.
- Refer to the [cloudcore helm values](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/values.yaml#L67).
-> - Operations in step 5-6 related to cloudcore could also be skipped.
+> Note for Helm deployments:
+> - Stream certificates are generated automatically and the CloudStream feature is enabled by default. Therefore, Steps 1-3 can be skipped unless customization is needed.
+> - Step 4 can be handled by the iptablesmanager component by default, so manual operations are not needed. Refer to the [CloudCore Helm values](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/values.yaml#L67).
+> - Operations in Steps 5-6 related to CloudCore can also be skipped.
-1. Make sure you can find the kubernetes `ca.crt` and `ca.key` files. If you set up your kubernetes cluster by `kubeadm` , those files will be in `/etc/kubernetes/pki/` dir.
+1. Ensure you can locate the Kubernetes `ca.crt` and `ca.key` files. If you set up your Kubernetes cluster with `kubeadm`, these files will be in the `/etc/kubernetes/pki/` directory.
``` shell
ls /etc/kubernetes/pki/
```
-2. Set `CLOUDCOREIPS` env. The environment variable is set to specify the IP address of cloudcore, or a VIP if you have a highly available cluster.
- Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with cloudcore.
+2. Set the `CLOUDCOREIPS` environment variable to specify the IP address of CloudCore, or a VIP if you have a highly available cluster. Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with CloudCore.
```bash
export CLOUDCOREIPS="192.168.0.139"
```
- (Warning: the same **terminal** is essential to continue the work, or it is necessary to type this command again.) Checking the environment variable with the following command:
+
+ (Warning: keep using the same **terminal** for the remaining steps, or you will need to set this variable again.) You can check the environment variable with the following command:
+
``` shell
echo $CLOUDCOREIPS
```
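Because the variable only lives in the current terminal, it can help to persist it in a shell startup file. A sketch, assuming a Bash login shell (`~/.bashrc`); use your shell's own startup file otherwise:

```shell
# Persist the variable for future shells; ~/.bashrc is an assumption.
echo 'export CLOUDCOREIPS="192.168.0.139"' >> "$HOME/.bashrc"

# Export it in the current shell as well so the next steps work immediately.
export CLOUDCOREIPS="192.168.0.139"
echo "$CLOUDCOREIPS"
```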
-3. Generate the certificates for **CloudStream** on cloud node, however, the generation file is not in the `/etc/kubeedge/`, we need to copy it from the repository which was git cloned from GitHub.
- Change user to root:
+3. Generate the certificates for **CloudStream** on the cloud node. The generation script is not in `/etc/kubeedge/`, so it needs to be copied from the repository cloned from GitHub. Switch to the root user:
+
```shell
sudo su
```
- Copy certificates generation file from original cloned repository:
+
+ Copy the certificate generation file from the original cloned repository:
+
```shell
cp $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh /etc/kubeedge/
```
+
Change directory to the kubeedge directory:
+
```shell
cd /etc/kubeedge/
```
+
Generate certificates from **certgen.sh**
```bash
/etc/kubeedge/certgen.sh stream
```
-4. It is needed to set iptables on the host. (This command should be executed on every apiserver deployed node.)(In this case, this the master node, and execute this command by root.)
- Run the following command on the host on which each apiserver runs:
+4. Set up iptables on the host, as root, on every node where an apiserver is deployed (in this case, the master node). Run the following command on the host where each apiserver runs:
- **Note:** You need to get the configmap first, which contains all the cloudcore ips and tunnel ports.
+ **Note:** First, get the configmap containing all the CloudCore IPs and tunnel ports:
```bash
kubectl get cm tunnelport -nkubeedge -oyaml
@@ -276,7 +299,7 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
...
```
- Then set all the iptables for multi cloudcore instances to every node that apiserver runs. The cloudcore ips and tunnel ports should be get from configmap above.
+ Then set the iptables rules for all CloudCore instances on every node where an apiserver runs. The CloudCore IPs and tunnel ports should be obtained from the configmap above.
```bash
iptables -t nat -A OUTPUT -p tcp --dport $YOUR-TUNNEL-PORT -j DNAT --to $YOUR-CLOUDCORE-IP:10003
@@ -284,22 +307,24 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
iptables -t nat -A OUTPUT -p tcp --dport 10351 -j DNAT --to 192.168.1.17:10003
```
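With several CloudCore instances, generating the rules from a list of port/IP pairs keeps them consistent. This is a convenience sketch that only prints the commands; the pairs below are placeholders, and the real values should come from the `tunnelport` configmap shown above:

```shell
# Print one DNAT rule per "port ip" pair read from stdin.
# Review the output, then run it as root (e.g. pipe it to `sh`).
print_dnat_rules() {
  while read -r port ip; do
    [ -n "$port" ] || continue
    echo "iptables -t nat -A OUTPUT -p tcp --dport $port -j DNAT --to $ip:10003"
  done
}

print_dnat_rules <<'EOF'
10350 192.168.1.16
10351 192.168.1.17
EOF
```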
- If you are not sure if you have setting of iptables, and you want to clean all of them.
- (If you set up iptables wrongly, it will block you out of your `kubectl logs` feature)
+ If you are unsure about the current iptables settings, you can clean all of them. (If iptables is set up incorrectly, it will lock you out of the `kubectl logs` feature.)
+
The following command can be used to clean up iptables:
+
``` shell
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```
-
5. Modify **both** `/etc/kubeedge/config/cloudcore.yaml` and `/etc/kubeedge/config/edgecore.yaml` on cloudcore and edgecore. Set up **cloudStream** and **edgeStream** to `enable: true`. Change the server IP to the cloudcore IP (the same as $CLOUDCOREIPS).
- Open the YAML file in cloudcore:
+ Open the YAML file in CloudCore:
+
```shell
sudo nano /etc/kubeedge/config/cloudcore.yaml
```
Modify the file in the following part (`enable: true`):
+
```yaml
cloudStream:
enable: true
@@ -313,11 +338,14 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
tunnelPort: 10004
```
- Open the YAML file in edgecore:
+ Open the YAML file in EdgeCore:
+
``` shell
sudo nano /etc/kubeedge/config/edgecore.yaml
```
+
Modify the file in the following part (`enable: true`), (`server: 192.168.0.193:10004`):
+
``` yaml
edgeStream:
enable: true
@@ -330,29 +358,38 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
writeDeadline: 15
```
-6. Restart all the cloudcore and edgecore.
+6. Restart all the CloudCore and EdgeCore.
``` shell
sudo su
```
- cloudCore in process mode:
+
+ If CloudCore is running in process mode:
+
``` shell
pkill cloudcore
nohup cloudcore > cloudcore.log 2>&1 &
```
- or cloudCore in kubernetes deployment mode:
+
+ If CloudCore is running in Kubernetes deployment mode:
+
``` shell
kubectl -n kubeedge rollout restart deployment cloudcore
```
- edgeCore:
+
+ EdgeCore:
+
``` shell
systemctl restart edgecore.service
```
- If you fail to restart edgecore, check if that is because of `kube-proxy` and kill it. **kubeedge** reject it by default, we use a succedaneum called [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md)
- **Note:** the importance is to avoid `kube-proxy` being deployed on edgenode. There are two methods to solve it:
- 1. Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
+ If restarting EdgeCore fails, check whether the failure is caused by `kube-proxy` and, if so, kill it. **KubeEdge** rejects it by default; a replacement called [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md) is used instead.
+
+ **Note:** It is important to prevent `kube-proxy` from being deployed on the edge node. There are two methods to achieve this:
+
+ - **Method 1:** Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
+
``` yaml
spec:
template:
@@ -365,24 +402,26 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
- key: node-role.kubernetes.io/edge
operator: DoesNotExist
```
- or just run the below command directly in the shell window:
+
+ or just run the following command directly in the shell window:
+
```shell
kubectl patch daemonset kube-proxy -n kube-system -p '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "node-role.kubernetes.io/edge", "operator": "DoesNotExist"}]}]}}}}}}}'
```
- 2. If you still want to run `kube-proxy`, ask **edgecore** not to check the environment by adding the env variable in `edgecore.service` :
+ - **Method 2:** If you still want to run `kube-proxy`, instruct **edgecore** not to check the environment by adding the environment variable to `edgecore.service`:
``` shell
sudo vi /etc/kubeedge/edgecore.service
```
- - Add the following line into the **edgecore.service** file:
+ Add the following line into the **edgecore.service** file:
``` shell
Environment="CHECK_EDGECORE_ENVIRONMENT=false"
```
- - The final file should look like this:
+ The final file should look like this:
```
Description=edgecore.service
@@ -397,6 +436,7 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
```
### Support Metrics-server in Cloud
+
1. The realization of this function point reuses cloudstream and edgestream modules. So you also need to perform all steps of *Enable `kubectl logs` Feature*.
2. Since the kubelet ports of edge nodes and cloud nodes are not the same, the current release version of metrics-server(0.3.x) does not support automatic port identification (It is the 0.4.0 feature), so you need to manually compile the image from master branch yourself now.
@@ -442,7 +482,7 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
```
iptables -t nat -A OUTPUT -p tcp --dport 10350 -j DNAT --to $CLOUDCOREIPS:10003
```
- (To direct the request for metric-data from edgecore:10250 through tunnel between cloudcore and edgecore, the iptables is vitally important.)
+ (This iptables rule is essential: it directs requests for metric data from edgecore:10250 through the tunnel between CloudCore and EdgeCore.)
Before you deploy metrics-server, you have to make sure that you deploy it on the node which has apiserver deployed on. In this case, that is the master node. As a consequence, it is needed to make master node schedulable by the following command:
@@ -468,7 +508,8 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
- charlie-latest
```
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
+
1. Metrics-server needs to use hostnetwork network mode.
2. Use the image compiled by yourself and set imagePullPolicy to Never.
@@ -517,4 +558,5 @@ It provides a flag for users to specify kubeconfig path, the default path is `/r
```
### Node
-`keadm reset` or `keadm deprecated reset` will stop `edgecore` and it doesn't uninstall/remove any of the pre-requisites.
+
+`keadm reset` or `keadm deprecated reset` will stop `edgecore`; it does not uninstall or remove any of the prerequisites.
\ No newline at end of file
diff --git a/docusaurus.config.js b/docusaurus.config.js
index 2fbefafa79..b3b1a7b25b 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -136,7 +136,7 @@ const config = {
logo: {
src: "img/avatar.png",
target: "_self",
- href: "https://kubeedge.io",
+ href: "/",
},
items: [
{
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx
new file mode 100644
index 0000000000..f6335637c0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx
@@ -0,0 +1,26 @@
+---
+date: 2024-05-27
+title: 瑞斯康达科技股份有限公司
+subTitle:
+description: 采用KubeEdge作为智能监控方案实施的重要组成部分,有效完成了对工厂安全的AI监控,减少了安全事故的发生,提高了工厂的生产效率。
+tags:
+ - 用户案例
+---
+
+# 基于KubeEdge的智能监控方案
+
+## 挑战
+
+保障工业生产安全是瑞斯康达制造工厂的重要需求,传统工人的生产安全检测方式采用人工方式,速度慢、效率低,工人不遵守安全要求的情况仍时有发生,且容易被忽视,具有很大的安全隐患,影响工厂的生产效率。
+
+## 解决方案
+
+开发基于人工智能算法的工业智能监控应用,以取代人工监控。但仅有智能监控应用是不够的,智能边缘应用的部署和管理、云端训练与边缘推理的协同等新问题也随之出现,成为该解决方案在工业生产环境中大规模应用的瓶颈。
+
+中国电信研究院将KubeEdge作为智能监控方案实施的重要组成部分,帮助瑞斯康达科技解决该问题。中国电信研究院架构师Xiaohou Shi完成了该方案的设计。该案例通过工业视觉应用,结合深度学习算法,实时监控工厂工人的安全状态。引入KubeEdge作为边缘计算平台,用于管理边缘设备和智能监控应用的运行环境。通过KubeEdge,可以在云端对监控模型进行训练,并自动部署到边缘节点进行推理执行,提高运营效率,降低运维成本。
+
+## 优势
+
+在此应用场景中,KubeEdge完成了边缘应用的统一管理,同时KubeEdge还可以充分利用云边协同的优势,借助KubeEdge作为边缘计算平台,有效完成了对工厂安全的AI监控,减少了安全事故的发生,提高了工厂的生产效率。
+
+基于此成功案例,未来将在KubeEdge上部署更多深度学习算法,解决边缘计算方面的问题,未来也将与KubeEdge开展更多场景化工业智能应用的合作。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx
new file mode 100644
index 0000000000..6aa505e6fe
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-27
+title: 兴海物联科技有限公司
+subTitle:
+description: 兴海物联采用KubeEdge构建了云边端协同的智慧校园,大幅提升了校园管理效率。
+tags:
+ - 用户案例
+---
+
+# 基于KubeEdge构建智慧校园
+
+## 挑战
+
+兴海物联是一家利用建筑物联网平台、智能硬件、人工智能等技术,提供智慧楼宇综合解决方案的物联网企业,是中海物业智慧校园标准的制定者和践行者,是华为智慧校园解决方案核心全链条服务商。
+
+该公司服务客户遍及中国及全球80个主要城市,已交付项目741个,总建筑面积超过1.56亿平方米,业务涵盖高端住宅、商业综合体、超级写字楼、政府物业、工业园区等多种建筑类型。
+
+近年来,随着业务的拓展和园区业主对服务品质要求的不断提升,兴海物联致力于利用边缘计算和物联网技术构建可持续发展的智慧校园,提高园区运营和管理效率。
+
+## 解决方案
+
+如今兴海物联的服务领域越来越广泛,因此其解决方案需要具备可移植性和可复制性,需要保证数据的实时处理和安全的存储。KubeEdge以云原生开发和边云协同为设计理念,已成为兴海物联打造智慧校园不可或缺的一部分。
+
+- 容器镜像一次构建,随处运行,有效降低新建园区部署运维复杂度。
+- 边云协同使数据在边缘处理,确保实时性和安全性,并降低网络带宽成本。
+- KubeEdge 可以轻松添加硬件,并支持常见协议。无需二次开发。
+
+## 优势
+
+兴海物联基于KubeEdge和自有兴海物联云平台,构建了云边端协同的智慧校园,大幅提升了校园管理效率。在AI的助力下,近30%的重复性工作实现了自动化。未来,兴海物联还将继续与KubeEdge合作,推出基于KubeEdge的智慧校园解决方案。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/jingying/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/jingying/index.mdx
new file mode 100644
index 0000000000..7ae66437d9
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/jingying/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-28
+title: 精英数智科技股份有限公司
+subTitle:
+description: 精英数智科技与KubeEdge合作开发矿脑解决方案,覆盖云、边、端,让煤炭生产更安全。
+tags:
+ - 解决方案
+---
+
+# 基于KubeEdge的矿山大脑解决方案
+
+## 商业背景
+
+精英数智科技有限公司专注于为煤矿及瓦斯企业提供安全监控管理解决方案,提供可靠稳定的数据采集传输、现场感知、风险预测、智能监管等解决方案,帮助企业提高生产安全性,降低管理成本。
+精英数智科技利用AIoT和云边端协同,构建高危行业安全生产智能感知网络,推动新一代信息技术与安全生产的深度融合。
+
+## 解决方案
+
+精英数智科技有限公司与KubeEdge合作开发了矿山大脑解决方案,该方案覆盖云、边、端,让煤炭生产更安全。该方案具有以下优势:
+
+- KubeEdge兼容Kubernetes生态,支持Kubernetes应用平滑迁移到KubeEdge,大幅提升部署效率。
+- AI模型在云端训练,模型推理在边缘进行,大大提高资源利用率和推理速度。
+- 即使边缘节点与云端断开连接,服务实例也能自动恢复并正常运行,使系统更加可靠。
+- 边缘智能、强大的计算能力以及对海量边缘设备的管理,使得多种场景的精准音视频识别成为可能。
+
+精英数智科技有限公司在多年积累的基础上,具备了丰富的AI场景能力和云边端运维能力,有效保障了服务的可靠和识别的精准。
+
+## 优势
+
+山西煤矿企业通过矿山大脑解决方案,已实现千余座矿井的智能化开采:云端下发的AI分析算法进行实时风险评估,识别率高达98%;远程IT基础设施集中监控降低运维成本65%;全栈IT设备集成部署降低部署成本75%。
+矿山大脑助力煤炭行业安全生产,最终实现全行业智能化升级。精英数智科技有限公司将继续与KubeEdge携手,利用AI、IoT、大数据等技术,为煤炭行业安全生产推出全方位的智能边缘解决方案。
\ No newline at end of file
diff --git a/src/components/supporters/index.js b/src/components/supporters/index.js
index 98d3452e59..4e3c23b543 100644
--- a/src/components/supporters/index.js
+++ b/src/components/supporters/index.js
@@ -207,6 +207,12 @@ const supportList = [
name: "SF Technology",
img_src: "img/supporters/sf-tech.png",
external_link: "https://www.sf-tech.com.cn/",
+ },
+
+ {
+ name: "LookCan Ai",
+ img_src: "img/supporters/lookcan-logo.svg",
+ external_link: "https://www.lookcan.ai/",
}
];
diff --git a/src/pages/case-studies/Raisecom-Tech/index.mdx b/src/pages/case-studies/Raisecom-Tech/index.mdx
new file mode 100644
index 0000000000..847962c07f
--- /dev/null
+++ b/src/pages/case-studies/Raisecom-Tech/index.mdx
@@ -0,0 +1,24 @@
+---
+date: 2024-05-27
+title: Raisecom Technology CO.,Ltd
+subTitle:
+description: Using KubeEdge as an important part of the implementation of the intelligent monitoring solution effectively completes the AI monitoring of factory safety, reduces the occurrence of safety accidents, and improves the production efficiency of the factory.
+
+tags:
+ - UserCase
+---
+
+# Intelligent monitoring solution based on KubeEdge
+
+## Challenge
+Ensuring industrial production safety is an important requirement for Raisecom Technology's factories. Traditionally, worker safety compliance was checked manually, which was slow and inefficient. Workers still violated safety requirements at times, and such violations could easily be overlooked, creating serious safety risks and affecting the production efficiency of the factory.
+
+## Solution
+An industrial intelligent monitoring application based on AI algorithms was developed to replace the manual method. However, an intelligent application alone was not enough: new problems arose, such as deploying and managing the intelligent edge application and coordinating training on the cloud with inference on the edge, which could become a bottleneck for large-scale adoption of the solution in industrial production environments.
+
+China Telecom Research Institute used KubeEdge as an important part of the implementation of the intelligent monitoring solution to help Raisecom Technology solve this problem. Architect Xiaohou Shi from China Telecom Research Institute designed the solution. In this case, the safety status of factory workers is monitored in real time by an industrial vision application using a deep learning algorithm. KubeEdge was introduced as the edge computing platform to manage the edge devices and the runtime environment of the intelligent monitoring application. Monitoring models can be trained on the cloud and automatically deployed to edge nodes for inference via KubeEdge, improving operational efficiency and reducing maintenance costs.
+
+## Impact
+In this application scenario, KubeEdge provided unified management of edge applications and took full advantage of cloud-edge collaboration. With KubeEdge as the edge computing platform, AI-based safety monitoring of the factory was carried out effectively, which reduced the occurrence of safety accidents and improved the factory's production efficiency.
+
+Based on this successful case, more deep learning algorithms will be deployed on KubeEdge to solve edge computing problems, and further cooperation on scenario-oriented industrial intelligent applications with KubeEdge will be carried out in the future.
diff --git a/src/pages/case-studies/XingHai/index.mdx b/src/pages/case-studies/XingHai/index.mdx
new file mode 100644
index 0000000000..28955d8761
--- /dev/null
+++ b/src/pages/case-studies/XingHai/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-27
+title: XingHai IoT
+subTitle:
+description: Xinghai IoT uses KubeEdge to build a smart campus with cloud-edge-device collaboration, which greatly improves campus management efficiency.
+tags:
+ - UserCase
+---
+
+# Building smart campuses based on KubeEdge
+
+## Challenge
+
+Xinghai IoT is an IoT company that provides comprehensive smart building solutions by leveraging a construction IoT platform, intelligent hardware, and AI. It is a creator and practitioner of smart campus standards for China Overseas Property Management and a core full-chain service provider of smart campus solutions from Huawei.
+
+The company serves its customers in 80 major cities in China and around the world. It has delivered 741 projects, covering more than 156 million square meters. Its business covers a diverse range of building types, such as high-end residential buildings, commercial complexes, super office buildings, government properties, and industrial parks.
+
+In recent years, as its business expands and occupant demands for service quality grow, Xinghai IoT has been committed to using edge computing and IoT to build sustainable smart campuses, improving efficiency for campus operations and management.
+
+## Highlights
+
+Xinghai IoT now offers services in a wide range of areas, so its solutions must be portable and replicable while ensuring real-time data processing and secure data storage. KubeEdge, with services designed for cloud native development and edge-cloud synergy, has become an indispensable part of how Xinghai IoT builds smart campuses.
+
+- Container images are built once to run anywhere, effectively reducing the deployment and O&M complexity of new campuses.
+- Edge-cloud synergy enables data to be processed at the edge, ensuring real-time performance and security and lowering network bandwidth costs.
+- KubeEdge makes adding hardware easy and supports common protocols. No secondary development is needed.
+
+## Benefits
+
+Xinghai IoT built a smart campus with cloud-edge-device synergy based on KubeEdge and its own Xinghai IoT cloud platform, greatly improving the efficiency of campus management. With AI assistance, nearly 30% of the repetitive work is automated. In the future, Xinghai IoT will continue to collaborate with KubeEdge to launch KubeEdge-based smart campus solutions.
\ No newline at end of file
diff --git a/src/pages/case-studies/jingying/index.mdx b/src/pages/case-studies/jingying/index.mdx
new file mode 100644
index 0000000000..3a0600fd79
--- /dev/null
+++ b/src/pages/case-studies/jingying/index.mdx
@@ -0,0 +1,33 @@
+---
+date: 2024-05-28
+title: Jingying Shuzhi Technology Co., Ltd
+subTitle:
+description: Jingying Shuzhi Technology Co., Ltd worked with KubeEdge to develop the Mine Brain solution, which covers the cloud, edge, and devices and makes coal production safer.
+tags:
+ - Solution
+---
+
+# Mining brain solution based on KubeEdge
+
+## Business Background
+
+Jingying Shuzhi Technology Co., Ltd focuses on providing security monitoring and management solutions for coal mining and gas enterprises. Their solutions cover reliable, stable data collection and transmission,
+on-site perception, risk prediction, and intelligent supervision to help these enterprises improve production security and reduce management costs.
+By leveraging AIoT and cloud-edge-device synergy, Jingying Shuzhi Technology Co., Ltd has built an intelligent sensing network for safe production in
+high-risk industries, promoting the in-depth integration of next-generation information technologies and safe production.
+
+## Highlights
+
+Jingying Shuzhi Technology Co., Ltd worked with KubeEdge to develop the Mine Brain solution, which covers the cloud, edge, and devices and makes coal production safer.
+This solution has the following advantages:
+
+- KubeEdge is compatible with the Kubernetes ecosystem. It allows Kubernetes applications to be smoothly migrated to KubeEdge, greatly improving deployment efficiency.
+- AI models are trained on the cloud and model inference is performed on the edge, greatly improving resource utilization and inference speed.
+- Service instances can recover automatically and run normally even if edge nodes are disconnected from the cloud, so the system is more reliable.
+- Edge intelligence, powerful computing, and management of a massive number of edge devices make precise audio and video recognition possible across a range of scenarios.
+
+Building on years of accumulated experience, Jingying Shuzhi Technology Co., Ltd has developed the ability to handle many AI scenarios and cloud-edge-device O&M, effectively ensuring reliable services and precise recognition.
+
+## Benefits
+
+With the Mine Brain solution built on KubeEdge, Jingying Shuzhi Technology Co., Ltd brings cloud-edge-device synergy to safety monitoring in coal mining, making coal production safer and reducing management costs. In the future, the company will continue to collaborate with KubeEdge on intelligent solutions for safe production in high-risk industries.
\ No newline at end of file
diff --git a/static/img/supporters/lookcan-logo.svg b/static/img/supporters/lookcan-logo.svg
new file mode 100644
index 0000000000..63358af707
--- /dev/null
+++ b/static/img/supporters/lookcan-logo.svg
@@ -0,0 +1,17 @@
+
diff --git a/versionsArchived.json b/versionsArchived.json
index 6daa34e043..13397aeeda 100644
--- a/versionsArchived.json
+++ b/versionsArchived.json
@@ -1,7 +1,7 @@
{
- "Next": "https://kubeedge.io/docs/",
+ "Next": "/docs/",
"v1.17": "https://release-1-17.docs.kubeedge.io/docs/",
"v1.16": "https://release-1-16.docs.kubeedge.io/docs/",
"v1.15": "https://release-1-15.docs.kubeedge.io/docs/",
"v1.14": "https://release-1-14.docs.kubeedge.io/docs/"
-}
+}
\ No newline at end of file