From c96e24ac04517ee6c95986fa3ad6670607d99e2a Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Wed, 22 May 2024 17:59:16 +0800
Subject: [PATCH 01/20] =?UTF-8?q?kubeedge=E9=83=A8=E7=BD=B2=E6=96=87?=
=?UTF-8?q?=E6=A1=A3=E5=86=85=E5=AE=B9=E4=BC=98=E5=8C=96=EF=BC=9A=E8=BE=B9?=
=?UTF-8?q?=E7=BC=98=E8=8A=82=E7=82=B9=E8=B5=84=E6=BA=90=E7=B4=A7=E5=BC=A0?=
=?UTF-8?q?=E6=97=A0=E6=B3=95=E6=AD=A3=E5=B8=B8=E9=83=A8=E7=BD=B2=E7=9A=84?=
=?UTF-8?q?=E9=97=AE=E9=A2=98=E5=A4=84=E7=90=86?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Signed-off-by: hyp4293 <429302517@qq.com>
---
docs/setup/install-with-keadm.md | 17 +++++++++++++++++
.../current/setup/install-with-keadm.md | 18 ++++++++++++++++++
2 files changed, 35 insertions(+)
diff --git a/docs/setup/install-with-keadm.md b/docs/setup/install-with-keadm.md
index f69e90e316..0119583c5d 100644
--- a/docs/setup/install-with-keadm.md
+++ b/docs/setup/install-with-keadm.md
@@ -97,6 +97,23 @@ keadm init --set server.advertiseAddress="THE-EXPOSED-IP" --set server.nodeName=
If you are familiar with the helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
+**SPECIAL SCENARIO:**
+In the case of insufficient qualifications for edge nodes, we need to label them to prevent some applications from extending to edge nodes.
+
+```
+kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+
+
+```
+kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+
+Any daemonset cannot occupy the hardware resources of edge nodes.
+
+
### keadm manifest generate
You can also get the manifests with `keadm manifest generate`.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
index 13e3e7499e..65ac00517a 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
@@ -50,6 +50,24 @@ KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
当您看到以上信息,说明 KubeEdge 的云端组件 cloudcore 已经成功运行。
+**特殊场景:**
+边缘计算的硬件条件不好的情况,这里我们需要打上标签,让一些应用不扩展到edge节点上去。
+
+
+```
+kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+
+
+```
+kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+
+凡是daemonset的都不可以去占用edge节点的硬件资源。
+
+
### keadm beta init
如果您想要使用容器化方式部署云端组件 cloudcore ,您可以使用 `keadm beta init` 进行云端组件安装。
From 571e55cf9e299adf941579fd402b97ecde1c1bf7 Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Thu, 23 May 2024 14:59:49 +0800
Subject: [PATCH 02/20] Optimization of kubeedge deployment documentation:
Handling issues with insufficient edge node resources for normal deployment.
---
docs/setup/install-with-keadm.md | 13 +++----------
.../current/setup/install-with-keadm.md | 13 +++----------
2 files changed, 6 insertions(+), 20 deletions(-)
diff --git a/docs/setup/install-with-keadm.md b/docs/setup/install-with-keadm.md
index 0119583c5d..8b585bf53a 100644
--- a/docs/setup/install-with-keadm.md
+++ b/docs/setup/install-with-keadm.md
@@ -98,21 +98,14 @@ If you are familiar with the helm chart installation, please refer to [KubeEdge
**SPECIAL SCENARIO:**
-In the case of insufficient qualifications for edge nodes, we need to label them to prevent some applications from extending to edge nodes.
+When edge nodes have limited resources, we can use the `node-role.kubernetes.io/edge` label to prevent some applications from being scheduled onto edge nodes. `kube-proxy` and some other components are not required at the edge, so we can handle them accordingly.
```
kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
```
-
-```
-kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
-
-```
-
-Any daemonset cannot occupy the hardware resources of edge nodes.
-
+To handle `kube-proxy`, you can refer to the [two methods](#anchor-name) mentioned in the "Enable `kubectl logs` Feature" section of this document.
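+
+For reference, the JSON patch applied above is equivalent to setting the following affinity stanza on each DaemonSet's pod template (a sketch of the resulting spec, keyed on the `node-role.kubernetes.io/edge` label carried by KubeEdge edge nodes):
+
+```yaml
+spec:
+  template:
+    spec:
+      affinity:
+        nodeAffinity:
+          requiredDuringSchedulingIgnoredDuringExecution:
+            nodeSelectorTerms:
+            - matchExpressions:
+              - key: node-role.kubernetes.io/edge
+                operator: DoesNotExist
+```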
### keadm manifest generate
@@ -357,7 +350,7 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
```
If you fail to restart edgecore, check if that is because of `kube-proxy` and kill it. **kubeedge** reject it by default, we use a succedaneum called [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md)
- **Note:** the importance is to avoid `kube-proxy` being deployed on edgenode. There are two methods to solve it:
+ **Note:** the importance is to avoid `kube-proxy` being deployed on edgenode. There are two methods to solve it:
1. Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
``` yaml
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
index 65ac00517a..2f9d7ce967 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
@@ -51,7 +51,7 @@ KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
当您看到以上信息,说明 KubeEdge 的云端组件 cloudcore 已经成功运行。
**特殊场景:**
-边缘计算的硬件条件不好的情况,这里我们需要打上标签,让一些应用不扩展到edge节点上去。
+在边缘节点硬件资源紧张的情况下,我们需要利用标签让一些应用不调度到 edge 节点上。kube-proxy 和其他一些应用不是必须部署在边缘端,所以我们可以对它们进行相应处理。
```
@@ -59,14 +59,7 @@ kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n
```
-
-```
-kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
-
-```
-
-凡是daemonset的都不可以去占用edge节点的硬件资源。
-
+关于如何处理 kube-proxy,可以参考本文"启用 `kubectl logs` 功能"部分提到的[两种方法](#anchor-name)。
### keadm beta init
@@ -305,7 +298,7 @@ KubeEdge edgecore is running, For logs visit: /var/log/kubeedge/edgecore.log
如果您无法重启 edgecore,请检查是否是由于 `kube-proxy` 的缘故,同时杀死这个进程。 **kubeedge**
默认不纳入该进程,我们使用 [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md) 来进行替代
- **注意:** 可以考虑避免 `kube-proxy` 部署在 edgenode 上。有两种解决方法:
+ **注意:** 可以考虑避免 `kube-proxy` 部署在 edgenode 上。有两种解决方法:
1. 通过调用 `kubectl edit daemonsets.apps -n kube-system kube-proxy` 添加以下设置:
From a29473215bebe365f92192ba87fcfa1c77384a2c Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Fri, 24 May 2024 17:13:29 +0800
Subject: [PATCH 03/20] =?UTF-8?q?=E9=95=9C=E5=83=8F=E9=A2=84=E5=8A=A0?=
=?UTF-8?q?=E8=BD=BD=E5=8A=9F=E8=83=BD=E6=8C=87=E5=AF=BC=E6=96=87=E6=A1=A3?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
---
.../Instruction-Document.md | 156 ++++++++++++++++++
1 file changed, 156 insertions(+)
create mode 100644 blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
diff --git a/blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md b/blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
new file mode 100644
index 0000000000..77c246f373
--- /dev/null
+++ b/blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
@@ -0,0 +1,156 @@
+# KubeEdge 镜像预加载功能指导文档
+
+KubeEdge 1.16 版本引入了镜像预下载新特性,用户可以通过 ImagePrePullJob 这一 Kubernetes API 提前在边缘节点上加载镜像。该特性支持在批量边缘节点或节点组中预下载多个镜像,有助于缓解应用部署或更新过程中(尤其是大规模场景下)因镜像拉取导致的失败率高、效率低等问题。
+
+镜像预下载API示例:
+
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+ labels:
+    description: ImagePrePullLabel
+spec:
+ imagePrePullTemplate:
+ images:
+ - image1
+ - image2
+ nodes:
+ - edgenode1
+ - edgenode2
+ checkItems:
+ - "disk"
+ failureTolerate: "0.3"
+ concurrency: 2
+ timeoutSeconds: 180
+ retryTimes: 1
+
+```
+
+
+## 1. 准备工作
+
+**选用示例:Nginx Demo**
+
+nginx是一个轻量级镜像,用户无需任何环境即可进行此演示。nginx镜像将会提前上传到一个私有镜像仓库中。用户可以从云端调用预加载功能API,将私有镜像仓库中的nginx镜像,提前下发到边缘节点中。
+
+
+**1)本示例要求 KubeEdge 版本必须是 v1.16.0+,Kubernetes 版本是 v1.27.0+;此次选择的版本是 KubeEdge v1.16.0,Kubernetes v1.27.3**
+
+```
+[root@ke-cloud ~]# kubectl get node
+NAME STATUS ROLES AGE VERSION
+cloud.kubeedge Ready control-plane,master 3d v1.27.3
+edge.kubeedge Ready agent,edge 2d v1.27.7-kubeedge-v1.16.0
+
+说明:本文接下来的验证将使用边缘节点edge.kubeedge进行,如果你参考本文进行相关验证,后续边缘节点名称的配置需要根据你的实际情况进行更改。
+```
+
+**2)确保 cloudcore 开启了以下配置:**
+
+
+```
+ taskManager:
+    enable: true  # 由 false 修改为 true
+```
+可以通过 `kubectl edit configmap cloudcore -n kubeedge` 命令修改该配置,并重启 cloudcore 组件使配置生效。
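+
+上述 taskManager 开关位于 cloudcore 的配置中,以下为一个示意片段(字段层级以实际版本的 cloudcore 配置为准):
+
+```
+modules:
+  taskManager:
+    enable: true
+```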
+
+**3)准备示例代码:**
+
+yaml文件示例代码
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+spec:
+ imagePrePullTemplate:
+ concurrency: 1
+ failureTolerate: '0.1'
+ images:
+ - test:nginx
+ nodeNames:
+ - edge.kubeedge
+ imageSecrets: default/secret
+ retryTimes: 1
+ timeoutSeconds: 120
+
+```
+
+## 2. 准备私有镜像仓的镜像和Secret
+在这里准备了一个阿里云的私有镜像仓用作演示:registry.cn-hangzhou.aliyuncs.com/,使用的演示空间为jilimoxing。实际操作过程中可以依据真实情况进行修改
+
+**1)推送nginx进入私有镜像仓**
+```
+[root@cloud ~]# docker tag nginx registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+[root@cloud crds~]# docker push registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+```
+
+**2)在云端创建Secret**
+
+使用 `kubectl create secret docker-registry` 生成私有镜像仓库的 secret,请根据你的实际情况进行修改:
+
+```
+[root@cloud ~]# kubectl create secret docker-registry my-secret \
+ --docker-server=registry.cn-hangzhou.aliyuncs.com \
+ --docker-username=23021*****@qq.com \
+ --docker-password=Xy***** \
+ --docker-email=23021*****@qq.com
+
+[root@cloud ~]# kubectl get secret -A
+NAMESPACE NAME TYPE DATA AGE
+default my-secret kubernetes.io/dockerconfigjson 1 31s
+
+```
+
+## 3. 创建Yaml文件
+
+**1)修改代码**
+
+在云端节点上创建yaml文件,需要修改对应的images信息以及imageSecrets信息,保持和所需要预加载的镜像仓库secret一致,如下所示:
+```
+
+[root@ke-cloud ~]# vim imageprepull.yaml
+
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+spec:
+ imagePrePullTemplate:
+ concurrency: 1
+ failureTolerate: '0.1'
+ images:
+ - registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+ nodeNames:
+ - edge.kubeedge
+ imageSecrets: default/my-secret
+ retryTimes: 1
+ timeoutSeconds: 120
+
+```
+
+**2)执行文件**
+
+
+```
+[root@ke-cloud ~]# kubectl apply -f imageprepull.yaml
+```
+
+
+## 4. 检查边缘节点镜像是否预加载成功
+
+进入边缘端,使用命令 `ctr -n k8s.io i ls` 进行查看
+```
+[root@edge ~]# ctr -n k8s.io i ls
+```
+可以看到对应的镜像已经预加载成功
+```
+REF TYPE DIGEST SIZE PLATFORMS LABELS
+registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx application/vnd.docker.distribution.manifest.v2+json sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7 67.3 MiB linux/amd64
+```
+
+## 5. 其他
+
+**1)更多的KubeEdge官方示例请参考 https://github.com/kubeedge/examples**
\ No newline at end of file
From d75689d664f078b86bb2174ece04b2849bd619af Mon Sep 17 00:00:00 2001
From: Shubham Singh
Date: Wed, 15 May 2024 05:49:58 +0000
Subject: [PATCH 04/20] New Blog for Release KubeEdge v1.11
Signed-off-by: GitHub
Signed-off-by: hyp4293 <429302517@qq.com>
---
blog/release-v1.11/index.mdx | 138 +++++++++++++++++++++++++++++++++++
1 file changed, 138 insertions(+)
create mode 100644 blog/release-v1.11/index.mdx
diff --git a/blog/release-v1.11/index.mdx b/blog/release-v1.11/index.mdx
new file mode 100644
index 0000000000..64eb1ef59c
--- /dev/null
+++ b/blog/release-v1.11/index.mdx
@@ -0,0 +1,138 @@
+---
+authors:
+- KubeEdge SIG Release
+categories:
+- General
+- Announcements
+date: 2023-10-25
+draft: false
+lastmod: 2023-10-25
+summary: KubeEdge v1.11 is live!
+tags:
+- KubeEdge
+- kubeedge
+- edge computing
+- kubernetes edge computing
+- K8s edge orchestration
+- edge computing platform
+- cloud native
+- iot
+- iiot
+- release v1.11
+- v1.11
+title: KubeEdge v1.11 is live!
+---
+
+On October 21, 2023, KubeEdge released v1.11, introducing several exciting new features and enhancements that significantly improve node group management, mapper development, installation experience, and overall stability.
+
+## v1.11 What's New
+
+- [Node Group Management](#node-group-management)
+- [Mapper SDK](#mapper-sdk)
+- [Beta sub-commands in Keadm to GA](#beta-sub-commands-in-keadm-to-ga)
+- [Deprecation of original `init` and `join`](#deprecation-of-original-init-and-join)
+- [Next-gen Edged to Beta: Suitable for more scenarios](#next-gen-edged-to-beta-suitable-for-more-scenarios)
+
+## Release Highlights
+
+### Node Group Management
+
+Users can now deploy applications to several node groups without writing deployment for every group. Node group management helps users to:
+
+- Manage nodes in groups
+
+- Spread apps among node groups
+
+- Run different versions of app instances in different node groups
+
+- Limit service endpoints in the same location as the client
+
+Two new APIs have been introduced to implement Node Group Management:
+
+- **NodeGroup API**: represents a group of nodes that have the same labels.
+- **EdgeApplication API**: contains the template of the application organized by node groups, and the information on how to deploy different editions of the application to different node groups.
+
+Refer to the links for more details ([#3574](https://github.com/kubeedge/kubeedge/pull/3574), [#3719](https://github.com/kubeedge/kubeedge/pull/3719)).
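+
+As a sketch of how a node group can be declared (the apiVersion and field names here are illustrative assumptions; see the linked PRs for the authoritative schema):
+
+```yaml
+# Illustrative only: a NodeGroup selecting nodes by label (field names are assumptions)
+apiVersion: apps.kubeedge.io/v1alpha1
+kind: NodeGroup
+metadata:
+  name: hangzhou
+spec:
+  matchLabels:
+    location: hangzhou
+```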
+
+### Mapper SDK
+
+Mapper-sdk is a basic framework written in Go. Based on this framework, developers can more easily implement a new mapper. Mapper-sdk has realized the connection to KubeEdge, provides data conversion, manages the basic properties and status of devices, and offers basic capabilities and an abstract definition of the driver interface. Developers only need to implement the customized protocol driver interface of the corresponding device to realize the functions of a mapper.
+
+Refer to the link for more details ([#70](https://github.com/kubeedge/mappers-go/pull/70)).
+
+### Beta sub-commands in Keadm to GA
+
+Some new sub-commands in Keadm have moved to GA, including containerized deployment, offline installation, etc. The original `init` and `join` behaviors have been replaced by the implementation from `beta init` and `beta join`:
+
+- CloudCore will be running in containers and managed by Kubernetes Deployment by default.
+- Keadm now downloads releases that are packed as container images to edge nodes for node setup.
+
+- `init`: CloudCore Helm Chart is integrated into `init`, which can be used to deploy containerized CloudCore.
+
+- `join`: Installs edgecore as a system service from a Docker image, so there is no need to download it from a GitHub release.
+
+- `reset`: Reset the node, clean up the resources installed on the node by `init` or `join`. It will automatically detect the type of node to clean up.
+
+- `manifest generate`: Generate all the manifests to deploy the cloud-side components.
+
+Refer to the link for more details ([#3900](https://github.com/kubeedge/kubeedge/pull/3900)).
+
+### Deprecation of original `init` and `join`
+
+The original `init` and `join` sub-commands have been deprecated as they had issues with offline installation, etc.
+
+Refer to the link for more details ([#3900](https://github.com/kubeedge/kubeedge/pull/3900)).
+
+### Next-gen Edged to Beta: Suitable for more scenarios
+
+The new version of the lightweight engine Edged, optimized from Kubelet and integrated into edgecore, has moved to Beta. The new Edged will still communicate with the cloud through the reliable transmission tunnel.
+
+Refer to the link for more details (Dev-Branch for beta: [feature-new-edged](https://github.com/kubeedge/kubeedge/tree/feature-new-edged)).
+
+## Important Steps before Upgrading
+
+If you want to use Keadm to deploy KubeEdge v1.11.0, please note that the behaviors of the `init` and `join` sub-commands have been changed.
+
+## Other Notable Changes
+
+- Add custom image repo for keadm join beta ([#3654](https://github.com/kubeedge/kubeedge/pull/3654))
+
+- Keadm: beta join support remote runtime ([#3655](https://github.com/kubeedge/kubeedge/pull/3655))
+
+- Use sync mode to update pod status ([#3658](https://github.com/kubeedge/kubeedge/pull/3658))
+
+- Make log level configurable for local up kubeedge ([#3664](https://github.com/kubeedge/kubeedge/pull/3664))
+
+- Use dependency to pull images ([#3671](https://github.com/kubeedge/kubeedge/pull/3671))
+
+- Move apis and client under kubeedge/cloud/pkg/ to kubeedge/pkg/ ([#3683](https://github.com/kubeedge/kubeedge/pull/3683))
+
+- Add subresource field in application for API with subresource ([#3693](https://github.com/kubeedge/kubeedge/pull/3693))
+
+- Add Keadm beta e2e ([#3699](https://github.com/kubeedge/kubeedge/pull/3699))
+
+- Keadm beta config images: support remote runtime ([#3700](https://github.com/kubeedge/kubeedge/pull/3700))
+
+- Use unified image management ([#3720](https://github.com/kubeedge/kubeedge/pull/3720))
+
+- Use armhf as default for armv7/v6 ([#3723](https://github.com/kubeedge/kubeedge/pull/3723))
+
+- Add ErrStatus in api-server application ([#3742](https://github.com/kubeedge/kubeedge/pull/3742))
+
+- Support compile binaries with kubeedge/build-tools image ([#3756](https://github.com/kubeedge/kubeedge/pull/3756))
+
+- Add min TLS version for stream server ([#3764](https://github.com/kubeedge/kubeedge/pull/3764))
+
+- Adding security policy ([#3778](https://github.com/kubeedge/kubeedge/pull/3778))
+
+- Chart: add cert domain config in helm chart ([#3802](https://github.com/kubeedge/kubeedge/pull/3802))
+
+- Add domain support for certgen.sh ([#3808](https://github.com/kubeedge/kubeedge/pull/3808))
+
+- Remove default KubeConfig for cloudcore ([#3836](https://github.com/kubeedge/kubeedge/pull/3836))
+
+- Helm: Allow annotation of the cloudcore service ([#3856](https://github.com/kubeedge/kubeedge/pull/3856))
+
+- Add rate limiter for edgehub ([#3862](https://github.com/kubeedge/kubeedge/pull/3862))
+
+- Sync pod status immediately when status update ([#3891](https://github.com/kubeedge/kubeedge/pull/3891))
\ No newline at end of file
From 214001a2f03556a86c251cfe44ca1db3f1ea6274 Mon Sep 17 00:00:00 2001
From: Shubham Singh
Date: Thu, 16 May 2024 12:11:20 +0000
Subject: [PATCH 05/20] fixed the release date
Signed-off-by: GitHub
Signed-off-by: hyp4293 <429302517@qq.com>
---
blog/release-v1.11/index.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/blog/release-v1.11/index.mdx b/blog/release-v1.11/index.mdx
index 64eb1ef59c..33bee64caa 100644
--- a/blog/release-v1.11/index.mdx
+++ b/blog/release-v1.11/index.mdx
@@ -23,7 +23,7 @@ tags:
title: KubeEdge v1.11 is live!
---
-On October 21, 2023, KubeEdge released v1.11, introducing several exciting new features and enhancements that significantly improve node group management, mapper development, installation experience, and overall stability.
+On June 21, 2022, KubeEdge released v1.11, introducing several exciting new features and enhancements that significantly improve node group management, mapper development, installation experience, and overall stability.
## v1.11 What's New
From b5121694bedfc3d69754e1a67ed0b42162a4e239 Mon Sep 17 00:00:00 2001
From: Shubham Singh
Date: Thu, 16 May 2024 11:48:03 +0000
Subject: [PATCH 06/20] New Blog for Release KubeEdge v1.11
Signed-off-by: GitHub
Signed-off-by: hyp4293 <429302517@qq.com>
---
blog/release-v1.13/index.mdx | 104 +++++++++++++++++++++++++++++++++++
1 file changed, 104 insertions(+)
create mode 100644 blog/release-v1.13/index.mdx
diff --git a/blog/release-v1.13/index.mdx b/blog/release-v1.13/index.mdx
new file mode 100644
index 0000000000..0f79532618
--- /dev/null
+++ b/blog/release-v1.13/index.mdx
@@ -0,0 +1,104 @@
+---
+authors:
+- KubeEdge SIG Release
+categories:
+- General
+- Announcements
+date: 2023-01-23
+draft: false
+lastmod: 2023-01-23
+summary: KubeEdge v1.13 is live!
+tags:
+- KubeEdge
+- kubeedge
+- edge computing
+- kubernetes edge computing
+- K8s edge orchestration
+- edge computing platform
+- cloud native
+- iot
+- iiot
+- release v1.13
+- v1.13
+title: KubeEdge v1.13 is live!
+---
+
+On Jan 18, 2023, KubeEdge released v1.13. The new version introduces several enhanced features, significantly improving performance, security, and edge device management.
+
+## v1.13 What's New
+
+- [Performance Improvement](#performance-improvement)
+
+- [Security Improvement](#security-improvement)
+
+- [Upgrade Kubernetes Dependency to v1.23.15](#upgrade-kubernetes-dependency-to-v12315)
+
+- [Modbus Mapper based on DMI](#modbus-mapper-based-on-dmi)
+
+- [Support Rolling Upgrade for Edge Nodes from Cloud](#support-rolling-upgrade-for-edge-nodes-from-cloud)
+
+- [Test Runner for conformance test](#test-runner-for-conformance-test)
+
+- [EdgeMesh: Added configurable field TunnelLimitConfig to edge-tunnel module](#edgemesh-added-configurable-field-tunnellimitconfig-to-edge-tunnel-module)
+
+### Performance Improvement
+
+- **CloudCore memory usage is reduced by 40%**, through a unified generic informer and the removal of unnecessary caches. ([#4375](https://github.com/kubeedge/kubeedge/pull/4375), [#4377](https://github.com/kubeedge/kubeedge/pull/4377))
+
+- Optimized list-watch processing in dynamicController: each watcher has a separate channel and goroutine to improve processing efficiency ([#4506](https://github.com/kubeedge/kubeedge/pull/4506))
+
+- Added a list-watch synchronization mechanism between cloud and edge, and added a dynamicController watch GC mechanism ([#4484](https://github.com/kubeedge/kubeedge/pull/4484))
+
+- Removed 10s hard delay when offline nodes turn online ([#4490](https://github.com/kubeedge/kubeedge/pull/4490))
+
+- Added a prometheus monitor server and a `connected_nodes` metric to cloudHub. This metric tallies the number of connected nodes for each cloudhub instance ([#3646](https://github.com/kubeedge/kubeedge/pull/3646))
+
+- Added pprof for visualization and analysis of profiling data ([#3646](https://github.com/kubeedge/kubeedge/pull/3646))
+
+- CloudCore configuration is now automatically adjusted according to nodeLimit to adapt to the number of nodes of different scales ([#4376](https://github.com/kubeedge/kubeedge/pull/4376))
+
+### Security Improvement
+
+- KubeEdge is proud to announce that we are digitally signing all release artifacts (including binary artifacts and container images). Signing artifacts gives end users a chance to verify the integrity of downloaded resources. It makes it possible to mitigate man-in-the-middle attacks directly on the client side and therefore ensures the trustworthiness of the remote serving the artifacts. By doing this, we reached SLSA security assessment level L3 ([#4285](https://github.com/kubeedge/kubeedge/pull/4285))
+
+- Removed the token field from the edge node configuration file edgecore.yaml to eliminate the risk of edge information leakage ([#4488](https://github.com/kubeedge/kubeedge/pull/4488))
+
+### Upgrade Kubernetes Dependency to v1.23.15
+
+Upgraded the vendored kubernetes version to v1.23.15. Users are now able to use the features of the new version on both the cloud and the edge side.
+
+Refer to the link for more details. ([#4509](https://github.com/kubeedge/kubeedge/pull/4509))
+
+### Modbus Mapper based on DMI
+
+Modbus Device Mapper based on DMI is provided, which is used to access Modbus protocol devices and uses DMI to synchronize the management plane messages of devices with edgecore.
+
+Refer to the link for more details. ([mappers-go#79](https://github.com/kubeedge/mappers-go/pull/79))
+
+### Support Rolling Upgrade for Edge Nodes from Cloud
+
+Users are now able to trigger a rolling upgrade for edge nodes from the cloud, and specify the number of nodes upgraded concurrently with `nodeupgradejob.spec.concurrency`. The default concurrency value is 1, which means edge nodes are upgraded one by one.
+
+Refer to the link for more details. ([#4476](https://github.com/kubeedge/kubeedge/pull/4476))
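+
+A minimal sketch of a NodeUpgradeJob using this field (the version string and field names are illustrative assumptions; see the linked PR for the authoritative schema):
+
+```yaml
+# Illustrative only: upgrade two edge nodes at a time (field names are assumptions)
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: NodeUpgradeJob
+metadata:
+  name: upgrade-example
+spec:
+  version: v1.13.0
+  concurrency: 2
+  nodeNames:
+  - edge-node-1
+  - edge-node-2
+```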
+
+### Test Runner for conformance test
+
+KubeEdge has provided the runner of the conformance test, which contains the scripts and related files of the conformance test.
+
+Refer to the link for more details. ([#4411](https://github.com/kubeedge/kubeedge/pull/4411))
+
+### EdgeMesh: Added configurable field TunnelLimitConfig to edge-tunnel module
+
+The tunnel stream of the edge-tunnel module is used to manage the data stream state of the tunnel. Users can obtain a stable and configurable tunnel stream to ensure the reliability of user application traffic forwarding.
+
+Users can configure the cache size of tunnel stream according to `TunnelLimitConfig` to support larger application relay traffic.
+
+Refer to the link for more details. ([#399](https://github.com/kubeedge/edgemesh/pull/399))
+
+Cancel the restrictions on the relay to ensure the stability of the user's streaming application or long link application.
+
+Refer to the link for more details. ([#400](https://github.com/kubeedge/edgemesh/pull/400))
+
+## Important Steps before Upgrading
+
+- EdgeCore now uses `containerd` runtime by default on KubeEdge v1.13. If you want to use `docker` runtime, you must set `edged.containerRuntime=docker` and corresponding docker configuration like `DockerEndpoint`, `RemoteRuntimeEndpoint` and `RemoteImageEndpoint` in EdgeCore.
\ No newline at end of file
From cd9db44a5fd5134b0a72bcff22df5b06a2d6e2ec Mon Sep 17 00:00:00 2001
From: Shubham Singh
Date: Thu, 16 May 2024 11:16:52 +0000
Subject: [PATCH 07/20] New Blog for Release KubeEdge v1.14
Signed-off-by: GitHub
Signed-off-by: hyp4293 <429302517@qq.com>
---
blog/release-v1.14/index.mdx | 80 ++++++++++++++++++++++++++++++++++++
1 file changed, 80 insertions(+)
create mode 100644 blog/release-v1.14/index.mdx
diff --git a/blog/release-v1.14/index.mdx b/blog/release-v1.14/index.mdx
new file mode 100644
index 0000000000..59c3452fc0
--- /dev/null
+++ b/blog/release-v1.14/index.mdx
@@ -0,0 +1,80 @@
+---
+authors:
+- KubeEdge SIG Release
+categories:
+- General
+- Announcements
+date: 2023-05-15
+draft: false
+lastmod: 2023-05-15
+summary: KubeEdge v1.14 is live!
+tags:
+- KubeEdge
+- kubeedge
+- edge computing
+- kubernetes edge computing
+- K8s edge orchestration
+- edge computing platform
+- cloud native
+- iot
+- iiot
+- release v1.14
+- v1.14
+title: KubeEdge v1.14 is live!
+---
+
+On May 15, 2023, KubeEdge released v1.14. The new version introduces several enhanced features, significantly improving security, reliability, and user experience.
+
+## v1.14 What's New
+
+- [Support Authentication and Authorization for Kube-API Endpoint for Applications On Edge Nodes](#support-authentication-and-authorization-for-kube-api-endpoint-for-applications-on-edge-nodes)
+
+- [Support Cluster Scope Resource Reliable Delivery to Edge Node](#support-cluster-scope-resource-reliable-delivery-to-edge-node)
+
+- [Upgrade Kubernetes Dependency to v1.24.14](#upgrade-kubernetes-dependency-to-v12414)
+
+- [Support Kubectl Attach to Container Running on Edge Node](#support-kubectl-attach-to-container-running-on-edge-node)
+
+- [Alpha version of KubeEdge Dashboard](#alpha-version-of-kubeedge-dashboard)
+
+## Release Highlights
+
+### Support Authentication and Authorization for Kube-API Endpoint for Applications On Edge Nodes
+
+The Kube-API endpoint for edge applications is implemented through the MetaServer in edgecore. However, in previous versions, the authentication and authorization of the Kube-API endpoint were performed in the cloud, which made authentication and authorization unavailable on the edge node, especially in offline scenarios.
+
+In this release, the authentication and authorization functionalities are implemented within the MetaServer at edge, which allows for limiting the access permissions of edge applications when accessing Kube-API endpoint at edge.
+
+Refer to the link for more details. ([#4802](https://github.com/kubeedge/kubeedge/pull/4802))
+
+### Support Cluster Scope Resource Reliable Delivery to Edge Node
+
+Since this release, cluster-scope resources are guaranteed to be delivered to the edge side reliably. In particular, when list-watch is used for global resources, cluster-scope resources can be delivered to the edge reliably and edge applications can work normally.
+
+Refer to the link for more details. ([#4758](https://github.com/kubeedge/kubeedge/pull/4758))
+
+### Upgrade Kubernetes Dependency to v1.24.14
+
+Upgraded the vendored kubernetes version to v1.24.14. Users are now able to use the features of the new version on both the cloud and the edge side.
+
+:::note
+The dockershim has been removed, which means users can't use docker runtime directly in this release.
+:::
+
+Refer to the link for more details. ([#4789](https://github.com/kubeedge/kubeedge/pull/4789))
+
+### Support Kubectl Attach to Container Running on Edge Node
+
+KubeEdge already supports the `kubectl logs/exec` commands, and `kubectl attach` is supported in this release. The `kubectl attach` command attaches to a running container on an edge node. Users can execute these commands from the cloud without having to operate on the edge nodes directly.
+
+Refer to the link for more details. ([#4734](https://github.com/kubeedge/kubeedge/pull/4734))
+
+### Alpha version of KubeEdge Dashboard
+
+KubeEdge dashboard provides a graphical user interface (GUI) for managing and monitoring your KubeEdge clusters. It allows users to manage edge applications running in the cluster and troubleshoot them.
+
+Refer to the link for more details. (https://github.com/kubeedge/dashboard)
+
+## Important Steps before Upgrading
+
+- On KubeEdge v1.14, EdgeCore has removed the dockershim support, so users can only use the `remote` runtime type, and it uses the `containerd` runtime by default. If you want to use the `docker` runtime, you must first set `edged.containerRuntime=remote` and the corresponding docker configuration like `RemoteRuntimeEndpoint` and `RemoteImageEndpoint` in EdgeCore, then install the cri-dockerd tools as described in: https://github.com/kubeedge/kubeedge/issues/4843
\ No newline at end of file
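The runtime settings mentioned above can be sketched as an EdgeCore configuration excerpt. This is a minimal sketch only: the file path and the cri-dockerd socket path are assumptions and may differ on your system.

```yaml
# Hypothetical excerpt of /etc/kubeedge/config/edgecore.yaml for using
# the docker runtime via cri-dockerd; socket paths are illustrative.
modules:
  edged:
    containerRuntime: remote
    remoteRuntimeEndpoint: unix:///run/cri-dockerd.sock
    remoteImageEndpoint: unix:///run/cri-dockerd.sock
```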
From 755bcee2d437e358dfa6dd641ea29c6e584e2e7f Mon Sep 17 00:00:00 2001
From: Shubham Singh
Date: Thu, 16 May 2024 11:57:44 +0000
Subject: [PATCH 08/20] added the release date
Signed-off-by: GitHub
Signed-off-by: hyp4293 <429302517@qq.com>
---
blog/release-v1.14/index.mdx | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/blog/release-v1.14/index.mdx b/blog/release-v1.14/index.mdx
index 59c3452fc0..02d94c30b2 100644
--- a/blog/release-v1.14/index.mdx
+++ b/blog/release-v1.14/index.mdx
@@ -23,7 +23,7 @@ tags:
title: KubeEdge v1.14 is live!
---
-On May 15, 2023, KubeEdge released v1.14. The new version introduces several enhanced features, significantly improving security, reliability, and user experience.
+On July 1, 2023, KubeEdge released v1.14. The new version introduces several enhanced features, significantly improving security, reliability, and user experience.
## v1.14 What's New
From 11c7d68e498b1cfdf7ea78c4748d8242b8b86172 Mon Sep 17 00:00:00 2001
From: fisherxu
Date: Mon, 20 May 2024 21:09:57 +0800
Subject: [PATCH 09/20] update version for release 1.17
Signed-off-by: fisherxu
Signed-off-by: hyp4293 <429302517@qq.com>
---
versionsArchived.json | 6 +++---
1 file changed, 3 insertions(+), 3 deletions(-)
diff --git a/versionsArchived.json b/versionsArchived.json
index 0e48ad73d9..6daa34e043 100644
--- a/versionsArchived.json
+++ b/versionsArchived.json
@@ -1,7 +1,7 @@
{
"Next": "https://kubeedge.io/docs/",
+ "v1.17": "https://release-1-17.docs.kubeedge.io/docs/",
+ "v1.16": "https://release-1-16.docs.kubeedge.io/docs/",
"v1.15": "https://release-1-15.docs.kubeedge.io/docs/",
- "v1.14": "https://release-1-14.docs.kubeedge.io/docs/",
- "v1.13": "https://release-1-13.docs.kubeedge.io/docs/",
- "v1.12": "https://release-1-12.docs.kubeedge.io/en/docs/"
+ "v1.14": "https://release-1-14.docs.kubeedge.io/docs/"
}
From 18d7615ae6b85f68f0187007785257de4a9e08de Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Wed, 22 May 2024 17:59:16 +0800
Subject: [PATCH 10/20] =?UTF-8?q?kubeedge=E9=83=A8=E7=BD=B2=E6=96=87?=
=?UTF-8?q?=E6=A1=A3=E5=86=85=E5=AE=B9=E4=BC=98=E5=8C=96=EF=BC=9A=E8=BE=B9?=
=?UTF-8?q?=E7=BC=98=E8=8A=82=E7=82=B9=E8=B5=84=E6=BA=90=E7=B4=A7=E5=BC=A0?=
=?UTF-8?q?=E6=97=A0=E6=B3=95=E6=AD=A3=E5=B8=B8=E9=83=A8=E7=BD=B2=E7=9A=84?=
=?UTF-8?q?=E9=97=AE=E9=A2=98=E5=A4=84=E7=90=86?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Signed-off-by: hyp4293 <429302517@qq.com>
---
docs/setup/install-with-keadm.md | 17 +++++++++++++++++
.../current/setup/install-with-keadm.md | 18 ++++++++++++++++++
2 files changed, 35 insertions(+)
diff --git a/docs/setup/install-with-keadm.md b/docs/setup/install-with-keadm.md
index f69e90e316..0119583c5d 100644
--- a/docs/setup/install-with-keadm.md
+++ b/docs/setup/install-with-keadm.md
@@ -97,6 +97,23 @@ keadm init --set server.advertiseAddress="THE-EXPOSED-IP" --set server.nodeName=
If you are familiar with the helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
+**SPECIAL SCENARIO:**
+In cases where edge nodes have limited resources, we need to add scheduling constraints to prevent some applications from being extended to edge nodes.
+
+```
+kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+
+
+```
+kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+
+This ensures that no DaemonSet occupies the hardware resources of the edge nodes.
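The commands above derive the DaemonSet names from `kubectl get daemonset` output before patching each one. The name-extraction stage can be sketched on sample output, so it runs without a cluster (the DaemonSet names here are examples only):

```shell
# Feed sample `kubectl get daemonset` output through the same
# grep/awk stages used above to extract the NAME column.
printf 'NAME        DESIRED\nkube-proxy  3\ncalico-node 3\n' \
  | grep -v NAME \
  | awk '{print $1}'
```

Each printed name is then handed to `kubectl patch` by `xargs -n 1`.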
+
+
### keadm manifest generate
You can also get the manifests with `keadm manifest generate`.
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
index 13e3e7499e..65ac00517a 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
@@ -50,6 +50,24 @@ KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
当您看到以上信息,说明 KubeEdge 的云端组件 cloudcore 已经成功运行。
+**特殊场景:**
+边缘计算的硬件条件不好的情况,这里我们需要打上标签,让一些应用不扩展到edge节点上去。
+
+
+```
+kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+
+
+```
+kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
+
+```
+
+凡是daemonset的都不可以去占用edge节点的硬件资源。
+
+
### keadm beta init
如果您想要使用容器化方式部署云端组件 cloudcore ,您可以使用 `keadm beta init` 进行云端组件安装。
From 345a54065ee6b60f9c51f4f1a70c0805493649fa Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Thu, 23 May 2024 14:59:49 +0800
Subject: [PATCH 11/20] Optimization of kubeedge deployment documentation:
Handling issues with insufficient edge node resources for normal deployment.
Signed-off-by: hyp4293 <429302517@qq.com>
---
docs/setup/install-with-keadm.md | 13 +++----------
.../current/setup/install-with-keadm.md | 13 +++----------
2 files changed, 6 insertions(+), 20 deletions(-)
diff --git a/docs/setup/install-with-keadm.md b/docs/setup/install-with-keadm.md
index 0119583c5d..8b585bf53a 100644
--- a/docs/setup/install-with-keadm.md
+++ b/docs/setup/install-with-keadm.md
@@ -98,21 +98,14 @@ If you are familiar with the helm chart installation, please refer to [KubeEdge
**SPECIAL SCENARIO:**
-In the case of insufficient qualifications for edge nodes, we need to label them to prevent some applications from extending to edge nodes.
+In cases where edge nodes have limited resources, we need to add scheduling constraints to prevent some applications from being extended to edge nodes. `Kube-proxy` and some other components are not required at the edge, so we can handle them accordingly.
```
kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n kube-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
```
-
-```
-kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
-
-```
-
-Any daemonset cannot occupy the hardware resources of edge nodes.
-
+To handle kube-proxy, you can refer to the [two methods](#anchor-name) mentioned in the "Enable `kubectl logs` Feature" section of this document.
### keadm manifest generate
@@ -357,7 +350,7 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
```
If you fail to restart edgecore, check whether that is because of `kube-proxy`, and if so, kill it. **kubeedge** rejects it by default; we use a substitute called [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md)
- **Note:** the importance is to avoid `kube-proxy` being deployed on edgenode. There are two methods to solve it:
+ **Note:** the importance is to avoid `kube-proxy` being deployed on edgenode. There are two methods to solve it:
1. Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
``` yaml
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
index 65ac00517a..2f9d7ce967 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/setup/install-with-keadm.md
@@ -51,7 +51,7 @@ KubeEdge cloudcore is running, For logs visit: /var/log/kubeedge/cloudcore.log
当您看到以上信息,说明 KubeEdge 的云端组件 cloudcore 已经成功运行。
**特殊场景:**
-边缘计算的硬件条件不好的情况,这里我们需要打上标签,让一些应用不扩展到edge节点上去。
+边缘计算的硬件条件不好的情况,这里我们需要打上标签,让一些应用不扩展到edge节点上去。 kube-proxy和其他的一些应用不是必须部署在边缘端,所以我们可以对他们进行处理。
```
@@ -59,14 +59,7 @@ kubectl get daemonset -n kube-system |grep -v NAME |awk '{print $1}' | xargs -n
```
-
-```
-kubectl get daemonset -n metallb-system |grep -v NAME |awk '{print $1}' | xargs -n 1 kubectl patch daemonset -n metallb-system --type='json' -p='[{"op": "replace","path": "/spec/template/spec/affinity","value":{"nodeAffinity":{"requiredDuringSchedulingIgnoredDuringExecution":{"nodeSelectorTerms":[{"matchExpressions":[{"key":"node-role.kubernetes.io/edge","operator":"DoesNotExist"}]}]}}}}]'
-
-```
-
-凡是daemonset的都不可以去占用edge节点的硬件资源。
-
+如何处理kube-proxy,可以参考本文中在'启用 kubectl logs 功能'部分提到的 [2种方法](#anchor-name)
### keadm beta init
@@ -305,7 +298,7 @@ KubeEdge edgecore is running, For logs visit: /var/log/kubeedge/edgecore.log
如果您无法重启 edgecore,请检查是否是由于 `kube-proxy` 的缘故,同时杀死这个进程。 **kubeedge**
默认不纳入该进程,我们使用 [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md) 来进行替代
- **注意:** 可以考虑避免 `kube-proxy` 部署在 edgenode 上。有两种解决方法:
+ **注意:** 可以考虑避免 `kube-proxy` 部署在 edgenode 上。有两种解决方法:
1. 通过调用 `kubectl edit daemonsets.apps -n kube-system kube-proxy` 添加以下设置:
From 4af8dd128c569fdbae8e8868a5a776824b32d166 Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Fri, 24 May 2024 17:13:29 +0800
Subject: [PATCH 12/20] =?UTF-8?q?=E9=95=9C=E5=83=8F=E9=A2=84=E5=8A=A0?=
=?UTF-8?q?=E8=BD=BD=E5=8A=9F=E8=83=BD=E6=8C=87=E5=AF=BC=E6=96=87=E6=A1=A3?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
Signed-off-by: hyp4293 <429302517@qq.com>
---
.../Instruction-Document.md | 156 ++++++++++++++++++
1 file changed, 156 insertions(+)
create mode 100644 blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
diff --git a/blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md b/blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
new file mode 100644
index 0000000000..77c246f373
--- /dev/null
+++ b/blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
@@ -0,0 +1,156 @@
+# KubeEdge 镜像预加载功能指导文档
+
+KubeEdge 1.16版本引入了镜像预下载新特性,用户可以通过ImagePrePullJob的Kubernetes API提前在边缘节点上加载镜像,该特性支持在批量边缘节点或节点组中预下载多个镜像,帮助减少加载镜像在应用部署或更新过程,尤其是大规模场景中,带来的失败率高、效率低下等问题。
+
+镜像预下载API示例:
+
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+ labels:
+    description: ImagePrePullLabel
+spec:
+ imagePrePullTemplate:
+ images:
+ - image1
+ - image2
+ nodes:
+ - edgenode1
+ - edgenode2
+ checkItems:
+ - "disk"
+ failureTolerate: "0.3"
+ concurrency: 2
+ timeoutSeconds: 180
+ retryTimes: 1
+
+```
+
+
+## 1. 准备工作
+
+**选用示例:Nginx Demo**
+
+nginx是一个轻量级镜像,用户无需任何环境即可进行此演示。nginx镜像将会提前上传到一个私有镜像仓库中。用户可以从云端调用预加载功能API,将私有镜像仓库中的nginx镜像,提前下发到边缘节点中。
+
+
+**1)本示例要求KubeEdge版本必须是v1.16.0+,kubernetes版本是v1.27.0+,此次选择的版本是KubeEdge v1.16.0,Kubernetes版本是v1.27.3**
+
+```
+[root@ke-cloud ~]# kubectl get node
+NAME STATUS ROLES AGE VERSION
+cloud.kubeedge Ready control-plane,master 3d v1.27.3
+edge.kubeedge Ready agent,edge 2d v1.27.7-kubeedge-v1.16.0
+
+说明:本文接下来的验证将使用边缘节点edge.kubeedge进行,如果你参考本文进行相关验证,后续边缘节点名称的配置需要根据你的实际情况进行更改。
+```
+
+**2)确保k8s apiserver开启了以下配置:**
+
+
+```
+ taskManager:
+ enable: true // 由false修改为true
+```
+可以通过命令 `kubectl edit configmap cloudcore -n kubeedge` 修改配置,并重启cloudcore组件使配置生效。
+
+**3)准备示例代码:**
+
+yaml文件示例代码
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+spec:
+ imagePrePullTemplate:
+ concurrency: 1
+ failureTolerate: '0.1'
+ images:
+ - test:nginx
+ nodeNames:
+ - edge.kubeedge
+ imageSecrets: default/secret
+ retryTimes: 1
+ timeoutSeconds: 120
+
+```
+
+## 2. 准备私有镜像仓的镜像和Secret
+在这里准备了一个阿里云的私有镜像仓用作演示:registry.cn-hangzhou.aliyuncs.com/,使用的演示空间为jilimoxing。实际操作过程中可以依据真实情况进行修改
+
+**1)推送nginx进入私有镜像仓**
+```
+[root@cloud ~]# docker tag nginx registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+[root@cloud crds~]# docker push registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+```
+
+**2)在云端创建Secret**
+
+使用Kubectl create secret docker-registry生成私有镜像仓库的secret,根据你的实际情况来进行修改
+
+```
+[root@cloud ~]# kubectl create secret docker-registry my-secret \
+ --docker-server=registry.cn-hangzhou.aliyuncs.com \
+ --docker-username=23021*****@qq.com \
+ --docker-password=Xy***** \
+ --docker-email=23021*****@qq.com
+
+[root@cloud ~]# kubectl get secret -A
+NAMESPACE NAME TYPE DATA AGE
+default my-secret kubernetes.io/dockerconfigjson 1 31s
+
+```
+
+## 3. 创建Yaml文件
+
+**1)修改代码**
+
+在云端节点上创建yaml文件,需要修改对应的images信息以及imageSecrets信息,保持和所需要预加载的镜像仓库secret一致,如下所示:
+```
+
+[root@ke-cloud ~]# vim imageprepull.yaml
+
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+spec:
+ imagePrePullTemplate:
+ concurrency: 1
+ failureTolerate: '0.1'
+ images:
+ - registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+ nodeNames:
+ - edge.kubeedge
+ imageSecrets: default/my-secret
+ retryTimes: 1
+ timeoutSeconds: 120
+
+```
+
+**2)执行文件**
+
+
+```
+[root@ke-cloud ~]# kubectl apply -f imageprepull.yaml
+```
+
+
+## 4. 检查边缘节点镜像是否预加载成功
+
+进入边缘端,使用命令ctr -n k8s.io i ls进行查看
+```
+[root@edge ~]# ctr -n k8s.io i ls
+```
+找到对应的镜像已被预加载成功
+```
+REF TYPE DIGEST SIZE PLATFORMS LABELS
+registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx application/vnd.docker.distribution.manifest.v2+json sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7 67.3 MiB linux/amd64
+```
+
+## 5. 其他
+
+**1)更多的KubeEdge官方示例请参考 https://github.com/kubeedge/examples**
\ No newline at end of file
From 1712f6a621e64188d8daa8d66b9bdaaa0e96196c Mon Sep 17 00:00:00 2001
From: Shubham Singh
Date: Fri, 17 May 2024 13:30:49 +0000
Subject: [PATCH 13/20] Replacing Twitter with X
Signed-off-by: GitHub
Signed-off-by: hyp4293 <429302517@qq.com>
---
docusaurus.config.js | 2 +-
src/css/custom.css | 4 ++--
static/img/twitter.svg | 5 -----
static/img/x.svg | 3 +++
4 files changed, 6 insertions(+), 8 deletions(-)
delete mode 100644 static/img/twitter.svg
create mode 100644 static/img/x.svg
diff --git a/docusaurus.config.js b/docusaurus.config.js
index d70a32f88b..2fbefafa79 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -182,7 +182,7 @@ const config = {
{
href: "https://twitter.com/KubeEdge",
position: "right",
- className: "header-twitter-link heade-icon",
+ className: "header-x-link heade-icon",
},
{
to: "/docs/community/slack",
diff --git a/src/css/custom.css b/src/css/custom.css
index 56a84d0ae1..44d1362b91 100644
--- a/src/css/custom.css
+++ b/src/css/custom.css
@@ -41,12 +41,12 @@
no-repeat;
}
-.header-twitter-link::before {
+.header-x-link::before {
content: "";
width: 25px;
height: 25px;
display: flex;
- background: url("/img/twitter.svg") no-repeat;
+ background: url("/img/x.svg") no-repeat;
}
.header-slack-link::before {
diff --git a/static/img/twitter.svg b/static/img/twitter.svg
deleted file mode 100644
index 7d2bd6f998..0000000000
--- a/static/img/twitter.svg
+++ /dev/null
@@ -1,5 +0,0 @@
-
diff --git a/static/img/x.svg b/static/img/x.svg
new file mode 100644
index 0000000000..76ae86f95f
--- /dev/null
+++ b/static/img/x.svg
@@ -0,0 +1,3 @@
+
\ No newline at end of file
From d59f6bc0a005ad90156f6f3f14658f24e7806dbe Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Sat, 25 May 2024 20:44:04 +0800
Subject: [PATCH 14/20] KubeEdge Image PrePull Feature Guide Document
Signed-off-by: hyp4293 <429302517@qq.com>
---
.../Instruction-Document.md | 157 ++++++++++++++++++
.../Instruction-Document.md | 156 +++++++++++++++++
2 files changed, 313 insertions(+)
create mode 100644 docs/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
create mode 100644 i18n/zh/docusaurus-plugin-content-docs/current/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
diff --git a/docs/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md b/docs/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
new file mode 100644
index 0000000000..1dba83fa07
--- /dev/null
+++ b/docs/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
@@ -0,0 +1,157 @@
+# KubeEdge Image PrePull Feature Guide Document
+
+KubeEdge version 1.16 introduces a new feature called Image Pre-Pull, which allows users to load images on edge nodes ahead of time through the ImagePrePullJob Kubernetes API. This feature supports pre-pulling multiple images in batches across edge nodes or node groups, helping to reduce the high failure rates and inefficiencies associated with pulling images during application deployment or updates, especially in large-scale scenarios.
+
+API example for pre-pull mirror image:
+
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+ labels:
+ description:ImagePrePullLabel
+spec:
+ imagePrePullTemplate:
+ images:
+ - image1
+ - image2
+ nodes:
+ - edgenode1
+ - edgenode2
+ checkItems:
+ - "disk"
+ failureTolerate: "0.3"
+ concurrency: 2
+ timeoutSeconds: 180
+ retryTimes: 1
+
+```
+
+
+## 1. Preparation
+
+**Example: Nginx Demo**
+
+Nginx is a lightweight image, so users can follow this demonstration without any prerequisite environment. The Nginx image will be uploaded to a private image registry in advance, and users can then call the pre-pull API from the cloud to pre-pull the Nginx image from the private registry to the edge nodes.
+
+**1) This example requires KubeEdge v1.16.0 or above and Kubernetes v1.27.0 or above. The versions used here are KubeEdge v1.16.0 and Kubernetes v1.27.3.**
+
+```
+[root@ke-cloud ~]# kubectl get node
+NAME STATUS ROLES AGE VERSION
+cloud.kubeedge Ready control-plane,master 3d v1.27.3
+edge.kubeedge Ready agent,edge 2d v1.27.7-kubeedge-v1.16.0
+
+Note: The following verification will use the edge node edge.kubeedge. If you refer to this article for related verification, the configuration of the edge node name in subsequent steps needs to be changed according to your actual situation.
+```
+
+**2)Ensure that the K8s apiserver has the following configuration enabled**
+
+
+```
+ taskManager:
+ enable: true // 由false修改为true
+```
+Changes can be made by running `kubectl edit configmap cloudcore -n kubeedge` and then restarting the CloudCore component to make the configuration take effect.
+
+**3) Prepare the sample code:**
+
+Example YAML file:
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+spec:
+ imagePrePullTemplate:
+ concurrency: 1
+ failureTolerate: '0.1'
+ images:
+ - test:nginx
+ nodeNames:
+ - edge.kubeedge
+ imageSecrets: default/secret
+ retryTimes: 1
+ timeoutSeconds: 120
+
+```
+
+## 2. Prepare the image and Secret for the private image repository
+A private image registry on Alibaba Cloud is used here for demonstration: registry.cn-hangzhou.aliyuncs.com, with the demo namespace jilimoxing. Modify these values according to your actual environment.
+
+**1) Push nginx to the private image registry**
+
+```
+[root@cloud ~]# docker tag nginx registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+[root@cloud crds~]# docker push registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+```
+
+**2)Create a Secret on the cloud**
+
+Use `kubectl create secret docker-registry` to generate a secret for the private image registry, modifying it according to your actual situation.
+
+```
+[root@cloud ~]# kubectl create secret docker-registry my-secret \
+ --docker-server=registry.cn-hangzhou.aliyuncs.com \
+ --docker-username=23021*****@qq.com \
+ --docker-password=Xy***** \
+ --docker-email=23021*****@qq.com
+
+[root@cloud ~]# kubectl get secret -A
+NAMESPACE NAME TYPE DATA AGE
+default my-secret kubernetes.io/dockerconfigjson 1 31s
+
+```
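Under the hood, `kubectl create secret docker-registry` stores the credentials as a base64-encoded `.dockerconfigjson` payload. A minimal sketch of that encoding, using illustrative credentials rather than the real ones:

```shell
# Build and decode a sample .dockerconfigjson payload, mirroring what
# the kubernetes.io/dockerconfigjson Secret type contains.
payload='{"auths":{"registry.cn-hangzhou.aliyuncs.com":{"username":"user","password":"pass"}}}'
encoded="$(printf '%s' "$payload" | base64 | tr -d '\n')"
printf '%s' "$encoded" | base64 -d   # round-trips back to the JSON payload
```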
+
+## 3. Create Yaml File
+
+**1)Modify Code**
+
+Create a YAML file on the cloud node, modifying the `images` and `imageSecrets` fields so that they are consistent with the images to be pre-pulled and the corresponding registry secret, as shown below:
+```
+
+[root@ke-cloud ~]# vim imageprepull.yaml
+
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+spec:
+ imagePrePullTemplate:
+ concurrency: 1
+ failureTolerate: '0.1'
+ images:
+ - registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+ nodeNames:
+ - edge.kubeedge
+ imageSecrets: default/my-secret
+ retryTimes: 1
+ timeoutSeconds: 120
+
+```
+
+**2) Apply the file**
+
+
+```
+[root@ke-cloud ~]# kubectl apply -f imageprepull.yaml
+```
+
+
+## 4. Check whether the image has been pre-pulled successfully on the edge node
+
+On the edge node, use the command `ctr -n k8s.io i ls` to check:
+```
+[root@edge ~]# ctr -n k8s.io i ls
+```
+The corresponding image has been pre-pulled successfully:
+```
+REF TYPE DIGEST SIZE PLATFORMS LABELS
+registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx application/vnd.docker.distribution.manifest.v2+json sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7 67.3 MiB linux/amd64
+```
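To script this check, you can grep the `ctr` output for the expected reference. Sketched here against sample output so it runs without a cluster; on a real edge node, replace the `printf` with the actual `ctr -n k8s.io i ls` command:

```shell
# Count matching references in sample image-list output.
printf 'REF TYPE\nregistry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx manifest\n' \
  | grep -c 'jilimoxing/test:nginx'
```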
+
+## 5. Other
+
+**1)For more official KubeEdge examples, please refer to https://github.com/kubeedge/examples**
+
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md b/i18n/zh/docusaurus-plugin-content-docs/current/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
new file mode 100644
index 0000000000..77c246f373
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
@@ -0,0 +1,156 @@
+# KubeEdge 镜像预加载功能指导文档
+
+KubeEdge 1.16版本引入了镜像预下载新特性,用户可以通过ImagePrePullJob的Kubernetes API提前在边缘节点上加载镜像,该特性支持在批量边缘节点或节点组中预下载多个镜像,帮助减少加载镜像在应用部署或更新过程,尤其是大规模场景中,带来的失败率高、效率低下等问题。
+
+镜像预下载API示例:
+
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+ labels:
+    description: ImagePrePullLabel
+spec:
+ imagePrePullTemplate:
+ images:
+ - image1
+ - image2
+ nodes:
+ - edgenode1
+ - edgenode2
+ checkItems:
+ - "disk"
+ failureTolerate: "0.3"
+ concurrency: 2
+ timeoutSeconds: 180
+ retryTimes: 1
+
+```
+
+
+## 1. 准备工作
+
+**选用示例:Nginx Demo**
+
+nginx是一个轻量级镜像,用户无需任何环境即可进行此演示。nginx镜像将会提前上传到一个私有镜像仓库中。用户可以从云端调用预加载功能API,将私有镜像仓库中的nginx镜像,提前下发到边缘节点中。
+
+
+**1)本示例要求KubeEdge版本必须是v1.16.0+,kubernetes版本是v1.27.0+,此次选择的版本是KubeEdge v1.16.0,Kubernetes版本是v1.27.3**
+
+```
+[root@ke-cloud ~]# kubectl get node
+NAME STATUS ROLES AGE VERSION
+cloud.kubeedge Ready control-plane,master 3d v1.27.3
+edge.kubeedge Ready agent,edge 2d v1.27.7-kubeedge-v1.16.0
+
+说明:本文接下来的验证将使用边缘节点edge.kubeedge进行,如果你参考本文进行相关验证,后续边缘节点名称的配置需要根据你的实际情况进行更改。
+```
+
+**2)确保k8s apiserver开启了以下配置:**
+
+
+```
+ taskManager:
+ enable: true // 由false修改为true
+```
+可以通过命令 `kubectl edit configmap cloudcore -n kubeedge` 修改配置,并重启cloudcore组件使配置生效。
+
+**3)准备示例代码:**
+
+yaml文件示例代码
+```
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+spec:
+ imagePrePullTemplate:
+ concurrency: 1
+ failureTolerate: '0.1'
+ images:
+ - test:nginx
+ nodeNames:
+ - edge.kubeedge
+ imageSecrets: default/secret
+ retryTimes: 1
+ timeoutSeconds: 120
+
+```
+
+## 2. 准备私有镜像仓的镜像和Secret
+在这里准备了一个阿里云的私有镜像仓用作演示:registry.cn-hangzhou.aliyuncs.com/,使用的演示空间为jilimoxing。实际操作过程中可以依据真实情况进行修改
+
+**1)推送nginx进入私有镜像仓**
+```
+[root@cloud ~]# docker tag nginx registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+[root@cloud crds~]# docker push registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+```
+
+**2)在云端创建Secret**
+
+使用Kubectl create secret docker-registry生成私有镜像仓库的secret,根据你的实际情况来进行修改
+
+```
+[root@cloud ~]# kubectl create secret docker-registry my-secret \
+ --docker-server=registry.cn-hangzhou.aliyuncs.com \
+ --docker-username=23021*****@qq.com \
+ --docker-password=Xy***** \
+ --docker-email=23021*****@qq.com
+
+[root@cloud ~]# kubectl get secret -A
+NAMESPACE NAME TYPE DATA AGE
+default my-secret kubernetes.io/dockerconfigjson 1 31s
+
+```
+
+## 3. 创建Yaml文件
+
+**1)修改代码**
+
+在云端节点上创建yaml文件,需要修改对应的images信息以及imageSecrets信息,保持和所需要预加载的镜像仓库secret一致,如下所示:
+```
+
+[root@ke-cloud ~]# vim imageprepull.yaml
+
+apiVersion: operations.kubeedge.io/v1alpha1
+kind: ImagePrePullJob
+metadata:
+ name: imageprepull-example
+spec:
+ imagePrePullTemplate:
+ concurrency: 1
+ failureTolerate: '0.1'
+ images:
+ - registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
+ nodeNames:
+ - edge.kubeedge
+ imageSecrets: default/my-secret
+ retryTimes: 1
+ timeoutSeconds: 120
+
+```
+
+**2)执行文件**
+
+
+```
+[root@ke-cloud ~]# kubectl apply -f imageprepull.yaml
+```
+
+
+## 4. 检查边缘节点镜像是否预加载成功
+
+进入边缘端,使用命令ctr -n k8s.io i ls进行查看
+```
+[root@edge ~]# ctr -n k8s.io i ls
+```
+找到对应的镜像已被预加载成功
+```
+REF TYPE DIGEST SIZE PLATFORMS LABELS
+registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx application/vnd.docker.distribution.manifest.v2+json sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7 67.3 MiB linux/amd64
+```
+
+## 5. 其他
+
+**1)更多的KubeEdge官方示例请参考 https://github.com/kubeedge/examples**
\ No newline at end of file
From 52ac1e3415a5c51677ef0a4557239f010583a8e0 Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Wed, 12 Jun 2024 23:43:44 +0800
Subject: [PATCH 15/20] KubeEdge Image PrePull Feature Guide Document
Signed-off-by: hyp4293 <429302517@qq.com>
---
.../Instruction-Document.md | 56 +++++++------------
.../Instruction-Document.md | 50 ++++++-----------
2 files changed, 39 insertions(+), 67 deletions(-)
rename docs/{Image-PrePull-Feature-Enhancement-Instruction-Document => advanced}/Instruction-Document.md (74%)
rename i18n/zh/docusaurus-plugin-content-docs/current/{Image-PrePull-Feature-Enhancement-Instruction-Document => advanced}/Instruction-Document.md (76%)
diff --git a/docs/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md b/docs/advanced/Instruction-Document.md
similarity index 74%
rename from docs/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
rename to docs/advanced/Instruction-Document.md
index 1dba83fa07..cfc247b32c 100644
--- a/docs/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
+++ b/docs/advanced/Instruction-Document.md
@@ -2,7 +2,7 @@
KubeEdge version 1.16 introduces a new feature called Image Pre-Pull, which allows users to load images ahead of time on edge nodes through the Kubernetes API of ImagePrePullJob. This feature supports pre-pull multiple images in batches across multiple edge nodes or node groups, helping to reduce the failure rates and inefficiencies associated with loading images during application deployment or updates, especially in large-scale scenarios.
-API example for pre-pull mirror image:
+API example for ImagePrePullJob:
```
apiVersion: operations.kubeedge.io/v1alpha1
@@ -10,7 +10,7 @@ kind: ImagePrePullJob
metadata:
name: imageprepull-example
labels:
- description:ImagePrePullLabel
+ description: ImagePrePullLabel
spec:
imagePrePullTemplate:
images:
@@ -43,41 +43,22 @@ NAME STATUS ROLES AGE VERSION
cloud.kubeedge Ready control-plane,master 3d v1.27.3
edge.kubeedge Ready agent,edge 2d v1.27.7-kubeedge-v1.16.0
-Note: The following verification will use the edge node edge.kubeedge. If you refer to this article for related verification, the configuration of the edge node name in subsequent steps needs to be changed according to your actual situation.
+Note: The following operations will use the edge node edge.kubeedge. If you refer to this document for related operations, the configuration of the edge node name in subsequent steps needs to be changed according to your actual situation.
```
-**2)Ensure that the K8s apiserver has the following configuration enabled**
+**2)Ensure that the CloudCore has the following configuration enabled**
```
taskManager:
- enable: true // 由false修改为true
+ enable: true // Change from false to true
```
Changes can be made by running `kubectl edit configmap cloudcore -n kubeedge` and then restarting the CloudCore component to make the configuration take effect.
-**3)Preparing Sample Code **
-yaml file example code
-```
-apiVersion: operations.kubeedge.io/v1alpha1
-kind: ImagePrePullJob
-metadata:
- name: imageprepull-example
-spec:
- imagePrePullTemplate:
- concurrency: 1
- failureTolerate: '0.1'
- images:
- - test:nginx
- nodeNames:
- - edge.kubeedge
- imageSecrets: default/secret
- retryTimes: 1
- timeoutSeconds: 120
-```
-## 2. Prepare the image and Secret for the private image repository
+## 2. Prepare the Secret for the private image
Here is a private image repository prepared for demonstration purposes using Alibaba Cloud's registry URL: registry.cn-hangzhou.aliyuncs.com. The demo space used is jilimoxing, and modifications may be necessary based on actual circumstances during the actual operation.
**1)Pushing nginx into the private image repository**
@@ -88,15 +69,15 @@ Here is a private image repository prepared for demonstration purposes using Ali
```
**2)Create a Secret on the cloud**
-
-Using Kubectl create secret docker-registry to generate a secret for a private image repository, modify it according to your actual situation.
+Secret is not a required field in ImagePrePullJob. If you need to pre-pull a private image, you can generate a secret for it.
+You can also use kubectl to create a Secret for accessing a container registry, such as when you don't have a Docker configuration file:
```
[root@cloud ~]# kubectl create secret docker-registry my-secret \
- --docker-server=registry.cn-hangzhou.aliyuncs.com \
- --docker-username=23021*****@qq.com \
- --docker-password=Xy***** \
- --docker-email=23021*****@qq.com
+  --docker-server=my-registry.example:5000 \
+  --docker-username=tiger \
+  --docker-password=pass1234 \
+  --docker-email=tiger@acme.example
[root@cloud ~]# kubectl get secret -A
NAMESPACE NAME TYPE DATA AGE
@@ -139,6 +120,15 @@ spec:
```
+**3) Get ImagePrePullJob status**
+
+Use: `kubectl get imageprepulljobs.operations.kubeedge.io imageprepull-example -o jsonpath='{.status}'`
+
+```
+[root@ke-cloud ~]# kubectl get imageprepulljobs.operations.kubeedge.io imageprepull-example -o jsonpath='{.status}'
+[root@ke-cloud ~]# {"action":"Success","event":"Pull","state":"Successful","status":[{"imageStatus":[{"image":"registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx","state":"Successful"}],"nodeStatus":{"action":"Success","event":"Pull","nodeName":"edge.kubeedge","state":"Successful","time":"2024-04-26T18:51:41Z"}}],"time":"2024-04-26T18:51:41Z"}
+```
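For automation, the status payload returned by the jsonpath query can be parsed programmatically. A minimal sketch in Python, assuming the status shape shown above (field names may differ across KubeEdge versions):

```python
import json

# Example payload, copied from the jsonpath output above; in practice you
# would capture it from the kubectl command (e.g. via subprocess).
raw = '''
{"action":"Success","event":"Pull","state":"Successful",
 "status":[{"imageStatus":[{"image":"registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx","state":"Successful"}],
 "nodeStatus":{"action":"Success","event":"Pull","nodeName":"edge.kubeedge",
 "state":"Successful","time":"2024-04-26T18:51:41Z"}}],
 "time":"2024-04-26T18:51:41Z"}
'''

status = json.loads(raw)
print("overall state:", status["state"])
for entry in status["status"]:
    node = entry["nodeStatus"]["nodeName"]
    for img in entry["imageStatus"]:
        print(f"{node}: {img['image']} -> {img['state']}")
```

A non-`Successful` per-image state here indicates which node failed to pull which image, which is useful when `failureTolerate` masks partial failures.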
+
## 4. Check whether the image has been pre-pulled successfully on the edge node
Enter the edge terminal and use the command `ctr -n k8s.io i ls` to view the images.
@@ -151,7 +141,3 @@ REF TYPE
registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx application/vnd.docker.distribution.manifest.v2+json sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7 67.3 MiB linux/amd64
```
-## 5. Other
-
-**1)For more official KubeEdge examples, please refer to https://github.com/kubeedge/examples**
-
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/Instruction-Document.md
similarity index 76%
rename from i18n/zh/docusaurus-plugin-content-docs/current/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
rename to i18n/zh/docusaurus-plugin-content-docs/current/advanced/Instruction-Document.md
index 77c246f373..5305f0275e 100644
--- a/i18n/zh/docusaurus-plugin-content-docs/current/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/Instruction-Document.md
@@ -44,10 +44,10 @@ NAME STATUS ROLES AGE VERSION
cloud.kubeedge Ready control-plane,master 3d v1.27.3
edge.kubeedge Ready agent,edge 2d v1.27.7-kubeedge-v1.16.0
-说明:本文接下来的验证将使用边缘节点edge.kubeedge进行,如果你参考本文进行相关验证,后续边缘节点名称的配置需要根据你的实际情况进行更改。
+说明:本文接下来的操作将使用边缘节点edge.kubeedge进行,如果你参考本文进行相关操作,后续边缘节点名称的配置需要根据你的实际情况进行更改。
```
-**2)确保k8s apiserver开启了以下配置:**
+**2)确保CloudCore开启了以下配置:**
```
@@ -56,29 +56,10 @@ edge.kubeedge Ready agent,edge 2d v1.27.7-kubeedge-v1.16.0
```
可以通过命令 kubectl edit configmap cloudcore -n kubeedge 修改配置,并重启云端的 cloudcore 组件使更改生效。
-**3)准备示例代码:**
-yaml文件示例代码
-```
-apiVersion: operations.kubeedge.io/v1alpha1
-kind: ImagePrePullJob
-metadata:
- name: imageprepull-example
-spec:
- imagePrePullTemplate:
- concurrency: 1
- failureTolerate: '0.1'
- images:
- - test:nginx
- nodeNames:
- - edge.kubeedge
- imageSecrets: default/secret
- retryTimes: 1
- timeoutSeconds: 120
-```
-## 2. 准备私有镜像仓的镜像和Secret
+## 2. 为私有镜像准备密钥
在这里准备了一个阿里云的私有镜像仓用作演示:registry.cn-hangzhou.aliyuncs.com/,使用的演示空间为jilimoxing。实际操作过程中可以依据真实情况进行修改
**1)推送nginx进入私有镜像仓**
@@ -88,15 +69,15 @@ spec:
```
**2)在云端创建Secret**
-
-使用Kubectl create secret docker-registry生成私有镜像仓库的secret,根据你的实际情况来进行修改
+Secret 不是 ImagePrePullJob 中的必填字段。如果需要预拉取私有镜像,可以为其生成一个 Secret。
+你还可以使用 kubectl 创建一个用于访问容器镜像仓库的 Secret,例如在没有 Docker 配置文件的情况下:
```
[root@cloud ~]# kubectl create secret docker-registry my-secret \
- --docker-server=registry.cn-hangzhou.aliyuncs.com \
- --docker-username=23021*****@qq.com \
- --docker-password=Xy***** \
- --docker-email=23021*****@qq.com
+ --docker-server=my-registry.example:5000 \
+ --docker-username=tiger \
+ --docker-password=pass1234 \
+ --docker-email=tiger@acme.example
[root@cloud ~]# kubectl get secret -A
NAMESPACE NAME TYPE DATA AGE
@@ -138,6 +119,15 @@ spec:
[root@ke-cloud ~]# kubectl apply -f imageprepull.yaml
```
+**3) 获取 ImagePrepulljob 的状态**
+
+使用命令 `kubectl get imageprepulljobs.operations.kubeedge.io imageprepull-example -o jsonpath='{.status}'` 进行查看。
+
+```
+[root@ke-cloud ~]# kubectl get imageprepulljobs.operations.kubeedge.io imageprepull-example -o jsonpath='{.status}'
+{"action":"Success","event":"Pull","state":"Successful","status":[{"imageStatus":[{"image":"registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx","state":"Successful"}],"nodeStatus":{"action":"Success","event":"Pull","nodeName":"edge.kubeedge","state":"Successful","time":"2024-04-26T18:51:41Z"}}],"time":"2024-04-26T18:51:41Z"}
+```
+
## 4. 检查边缘节点镜像是否预加载成功
@@ -150,7 +140,3 @@ spec:
REF TYPE DIGEST SIZE PLATFORMS LABELS
registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx application/vnd.docker.distribution.manifest.v2+json sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7 67.3 MiB linux/amd64
```
-
-## 5. 其他
-
-**1)更多的KubeEdge官方示例请参考 https://github.com/kubeedge/examples**
\ No newline at end of file
From bf2a75e264ece5940a2335f3bd2bad3cf33e9fad Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Sat, 6 Jul 2024 14:03:08 +0800
Subject: [PATCH 16/20] 6/7/2024
Signed-off-by: hyp4293 <429302517@qq.com>
---
.../Instruction-Document.md | 156 ------------------
...struction-Document.md => image-prepull.md} | 8 +-
...struction-Document.md => image-prepull.md} | 0
3 files changed, 7 insertions(+), 157 deletions(-)
delete mode 100644 blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
rename docs/advanced/{Instruction-Document.md => image-prepull.md} (97%)
rename i18n/zh/docusaurus-plugin-content-docs/current/advanced/{Instruction-Document.md => image-prepull.md} (100%)
diff --git a/blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md b/blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
deleted file mode 100644
index 77c246f373..0000000000
--- a/blog/Image-PrePull-Feature-Enhancement-Instruction-Document/Instruction-Document.md
+++ /dev/null
@@ -1,156 +0,0 @@
-# KubeEdge 镜像预加载功能指导文档
-
-KubeEdge 1.16版本引入了镜像预下载新特性,用户可以通过ImagePrePullJob的Kubernetes API提前在边缘节点上加载镜像,该特性支持在批量边缘节点或节点组中预下载多个镜像,帮助减少加载镜像在应用部署或更新过程,尤其是大规模场景中,带来的失败率高、效率低下等问题。
-
-镜像预下载API示例:
-
-```
-apiVersion: operations.kubeedge.io/v1alpha1
-kind: ImagePrePullJob
-metadata:
- name: imageprepull-example
- labels:
- description:ImagePrePullLabel
-spec:
- imagePrePullTemplate:
- images:
- - image1
- - image2
- nodes:
- - edgenode1
- - edgenode2
- checkItems:
- - "disk"
- failureTolerate: "0.3"
- concurrency: 2
- timeoutSeconds: 180
- retryTimes: 1
-
-```
-
-
-## 1. 准备工作
-
-**选用示例:Nginx Demo**
-
-nginx是一个轻量级镜像,用户无需任何环境即可进行此演示。nginx镜像将会提前上传到一个私有镜像仓库中。用户可以从云端调用预加载功能API,将私有镜像仓库中的nginx镜像,提前下发到边缘节点中。
-
-
-**1)本示例要求KubeEdge版本必须是v1.16.0+,kubernetes版本是v1.27.0+,此次选择的版本是KubeEdge v1.16.0,Kubernetes版本是v1.27.3**
-
-```
-[root@ke-cloud ~]# kubectl get node
-NAME STATUS ROLES AGE VERSION
-cloud.kubeedge Ready control-plane,master 3d v1.27.3
-edge.kubeedge Ready agent,edge 2d v1.27.7-kubeedge-v1.16.0
-
-说明:本文接下来的验证将使用边缘节点edge.kubeedge进行,如果你参考本文进行相关验证,后续边缘节点名称的配置需要根据你的实际情况进行更改。
-```
-
-**2)确保k8s apiserver开启了以下配置:**
-
-
-```
- taskManager:
- enable: true // 由false修改为true
-```
-可以通过命令修改kubectl edit configmap cloudcore -n kubeedge文件,并重启k8s-apiserver组件的cloudcore来进行更改。
-
-**3)准备示例代码:**
-
-yaml文件示例代码
-```
-apiVersion: operations.kubeedge.io/v1alpha1
-kind: ImagePrePullJob
-metadata:
- name: imageprepull-example
-spec:
- imagePrePullTemplate:
- concurrency: 1
- failureTolerate: '0.1'
- images:
- - test:nginx
- nodeNames:
- - edge.kubeedge
- imageSecrets: default/secret
- retryTimes: 1
- timeoutSeconds: 120
-
-```
-
-## 2. 准备私有镜像仓的镜像和Secret
-在这里准备了一个阿里云的私有镜像仓用作演示:registry.cn-hangzhou.aliyuncs.com/,使用的演示空间为jilimoxing。实际操作过程中可以依据真实情况进行修改
-
-**1)推送nginx进入私有镜像仓**
-```
-[root@cloud ~]# docker tag nginx registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
-[root@cloud crds~]# docker push registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
-```
-
-**2)在云端创建Secret**
-
-使用Kubectl create secret docker-registry生成私有镜像仓库的secret,根据你的实际情况来进行修改
-
-```
-[root@cloud ~]# kubectl create secret docker-registry my-secret \
- --docker-server=registry.cn-hangzhou.aliyuncs.com \
- --docker-username=23021*****@qq.com \
- --docker-password=Xy***** \
- --docker-email=23021*****@qq.com
-
-[root@cloud ~]# kubectl get secret -A
-NAMESPACE NAME TYPE DATA AGE
-default my-secret kubernetes.io/dockerconfigjson 1 31s
-
-```
-
-## 3. 创建Yaml文件
-
-**1)修改代码**
-
-在云端节点上创建yaml文件,需要修改对应的images信息以及imageSecrets信息,保持和所需要预加载的镜像仓库secret一致,如下所示:
-```
-
-[root@ke-cloud ~]# vim imageprepull.yaml
-
-apiVersion: operations.kubeedge.io/v1alpha1
-kind: ImagePrePullJob
-metadata:
- name: imageprepull-example
-spec:
- imagePrePullTemplate:
- concurrency: 1
- failureTolerate: '0.1'
- images:
- - registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx
- nodeNames:
- - edge.kubeedge
- imageSecrets: default/my-secret
- retryTimes: 1
- timeoutSeconds: 120
-
-```
-
-**2)执行文件**
-
-
-```
-[root@ke-cloud ~]# kubectl apply -f imageprepull.yaml
-```
-
-
-## 4. 检查边缘节点镜像是否预加载成功
-
-进入边缘端,使用命令ctr -n k8s.io i ls进行查看
-```
-[root@edge ~]# ctr -n k8s.io i ls
-```
-找到对应的镜像已被预加载成功
-```
-REF TYPE DIGEST SIZE PLATFORMS LABELS
-registry.cn-hangzhou.aliyuncs.com/jilimoxing/test:nginx application/vnd.docker.distribution.manifest.v2+json sha256:73e957703f1266530db0aeac1fd6a3f87c1e59943f4c13eb340bb8521c6041d7 67.3 MiB linux/amd64
-```
-
-## 5. 其他
-
-**1)更多的KubeEdge官方示例请参考 https://github.com/kubeedge/examples**
\ No newline at end of file
diff --git a/docs/advanced/Instruction-Document.md b/docs/advanced/image-prepull.md
similarity index 97%
rename from docs/advanced/Instruction-Document.md
rename to docs/advanced/image-prepull.md
index cfc247b32c..503ffdc749 100644
--- a/docs/advanced/Instruction-Document.md
+++ b/docs/advanced/image-prepull.md
@@ -1,3 +1,9 @@
+---
+title: KubeEdge Image PrePull Feature Guide Document
+sidebar_position: 6
+---
+
+
# KubeEdge Image PrePull Feature Guide Document
KubeEdge version 1.16 introduces a new feature called Image Pre-Pull, which allows users to load images ahead of time on edge nodes through the Kubernetes API of ImagePrePullJob. This feature supports pre-pull multiple images in batches across multiple edge nodes or node groups, helping to reduce the failure rates and inefficiencies associated with loading images during application deployment or updates, especially in large-scale scenarios.
@@ -58,7 +64,7 @@ changes can be made by editing the file kubectl edit configmap cloudcore -n kube
-## 2. Prepare the Secret for the privare image
+## 2. Prepare the Secret for the private image (optional)
A private image repository on Alibaba Cloud (registry.cn-hangzhou.aliyuncs.com) is used here for demonstration; the demo namespace is jilimoxing. Adjust these values to match your own environment.
**1) Push nginx into the private image repository**
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/Instruction-Document.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/image-prepull.md
similarity index 100%
rename from i18n/zh/docusaurus-plugin-content-docs/current/advanced/Instruction-Document.md
rename to i18n/zh/docusaurus-plugin-content-docs/current/advanced/image-prepull.md
From 3bb5a8467f62c3e97657df3df7edd10a03464c4f Mon Sep 17 00:00:00 2001
From: wbc6080
Date: Mon, 27 May 2024 15:34:36 +0800
Subject: [PATCH 17/20] add case study about raisecom tech
Signed-off-by: wbc6080
Signed-off-by: hyp4293 <429302517@qq.com>
---
.../case-studies/Raisecom-Tech/index.mdx | 26 +++++++++++++++++++
.../case-studies/Raisecom-Tech/index.mdx | 24 +++++++++++++++++
2 files changed, 50 insertions(+)
create mode 100644 i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx
create mode 100644 src/pages/case-studies/Raisecom-Tech/index.mdx
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx
new file mode 100644
index 0000000000..f6335637c0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx
@@ -0,0 +1,26 @@
+---
+date: 2024-05-27
+title: 瑞斯康达科技股份有限公司
+subTitle:
+description: 采用KubeEdge作为智能监控方案实施的重要组成部分,有效完成了对工厂安全的AI监控,减少了安全事故的发生,提高了工厂的生产效率。
+tags:
+ - 用户案例
+---
+
+# 基于KubeEdge的智能监控方案
+
+## 挑战
+
+保障工业生产安全是瑞斯康达制造工厂的重要需求,传统工人的生产安全检测方式采用人工方式,速度慢、效率低,工人不遵守安全要求的情况仍时有发生,且容易被忽视,具有很大的安全隐患,影响工厂的生产效率。
+
+## 解决方案
+
+开发基于人工智能算法的工业智能监控应用,以取代人工监控。但仅有智能监控应用是不够的,智能边缘应用的部署和管理、云端训练与边缘推理的协同等新问题也随之出现,成为该解决方案在工业生产环境中大规模应用的瓶颈。
+
+中国电信研究院将KubeEdge作为智能监控方案实施的重要组成部分,帮助瑞斯康达科技解决该问题。中国电信研究院架构师Xiaohou Shi完成了该方案的设计。该案例通过工业视觉应用,结合深度学习算法,实时监控工厂工人的安全状态。引入KubeEdge作为边缘计算平台,用于管理边缘设备和智能监控应用的运行环境。通过KubeEdge,可以在云端对监控模型进行训练,并自动部署到边缘节点进行推理执行,提高运营效率,降低运维成本。
+
+## 优势
+
+在此应用场景中,KubeEdge完成了边缘应用的统一管理,同时KubeEdge还可以充分利用云边协同的优势,借助KubeEdge作为边缘计算平台,有效完成了对工厂安全的AI监控,减少了安全事故的发生,提高了工厂的生产效率。
+
+基于此成功案例,未来将在KubeEdge上部署更多深度学习算法,解决边缘计算方面的问题,未来也将与KubeEdge开展更多场景化工业智能应用的合作。
\ No newline at end of file
diff --git a/src/pages/case-studies/Raisecom-Tech/index.mdx b/src/pages/case-studies/Raisecom-Tech/index.mdx
new file mode 100644
index 0000000000..847962c07f
--- /dev/null
+++ b/src/pages/case-studies/Raisecom-Tech/index.mdx
@@ -0,0 +1,24 @@
+---
+date: 2024-05-27
+title: Raisecom Technology Co., Ltd.
+subTitle:
+description: Using KubeEdge as an important part of the implementation of the intelligent monitoring solution effectively completes the AI monitoring of factory safety, reduces the occurrence of safety accidents, and improves the production efficiency of the factory.
+
+tags:
+ - UserCase
+---
+
+# Intelligent monitoring solution based on KubeEdge
+
+## Challenge
+Ensuring industrial production safety is an important requirement for Raisecom Technology's manufacturing plants. Traditionally, worker safety compliance was checked manually, which was slow and inefficient. Workers still occasionally failed to follow safety requirements, and such violations could go unnoticed, creating serious safety risks and reducing the factory's production efficiency.
+
+## Solution
+An industrial intelligent monitoring application based on AI algorithms was developed to replace the manual method. However, an intelligent application alone was not enough: new problems arose, such as deploying and managing the intelligent edge application and coordinating training on the cloud with inference on the edge, which could become a bottleneck for the large-scale application of the solution in industrial production environments.
+
+China Telecom Research Institute used KubeEdge as a key part of the intelligent monitoring solution to help Raisecom Technology solve this problem. Architect Xiaohou Shi from China Telecom Research Institute designed the solution. In this case, the safety status of factory workers is monitored in real time by an industrial vision application using a deep learning algorithm. KubeEdge was introduced as the edge computing platform to manage the edge devices and the runtime environment of the intelligent monitoring application. With KubeEdge, the monitoring model can be trained on the cloud and automatically deployed to the edge nodes for inference, improving operational efficiency and reducing maintenance costs.
+
+## Impact
+In this application scenario, KubeEdge provided unified management of edge applications and made full use of cloud-edge collaboration. With KubeEdge as the edge computing platform, AI-based safety monitoring of the factory was carried out effectively, which reduced safety accidents and improved the factory's production efficiency.
+
+Based on this successful case, more deep learning algorithms will be deployed on KubeEdge to address edge computing problems, and more cooperation on scenario-oriented industrial intelligent applications with KubeEdge will be carried out in the future.
From 0c2e057815b6208ba4ab3b0c11f461bd54c3d3c7 Mon Sep 17 00:00:00 2001
From: wbc6080
Date: Mon, 27 May 2024 17:10:28 +0800
Subject: [PATCH 18/20] add case study about XingHai IoT
Signed-off-by: wbc6080
Signed-off-by: hyp4293 <429302517@qq.com>
---
.../case-studies/XingHai/index.mdx | 30 +++++++++++++++++++
src/pages/case-studies/XingHai/index.mdx | 30 +++++++++++++++++++
2 files changed, 60 insertions(+)
create mode 100644 i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx
create mode 100644 src/pages/case-studies/XingHai/index.mdx
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx
new file mode 100644
index 0000000000..6aa505e6fe
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-27
+title: 兴海物联科技有限公司
+subTitle:
+description: 兴海物联采用KubeEdge构建了云边端协同的智慧校园,大幅提升了校园管理效率。
+tags:
+ - 用户案例
+---
+
+# 基于KubeEdge构建智慧校园
+
+## 挑战
+
+兴海物联是一家利用建筑物联网平台、智能硬件、人工智能等技术,提供智慧楼宇综合解决方案的物联网企业,是中海物业智慧校园标准的制定者和践行者,是华为智慧校园解决方案核心全链条服务商。
+
+该公司服务客户遍及中国及全球80个主要城市,已交付项目741个,总建筑面积超过1.56亿平方米,业务涵盖高端住宅、商业综合体、超级写字楼、政府物业、工业园区等多种建筑类型。
+
+近年来,随着业务的拓展和园区业主对服务品质要求的不断提升,兴海物联致力于利用边缘计算和物联网技术构建可持续发展的智慧校园,提高园区运营和管理效率。
+
+## 解决方案
+
+如今兴海物联的服务领域越来越广泛,因此其解决方案需要具备可移植性和可复制性,需要保证数据的实时处理和安全的存储。KubeEdge以云原生开发和边云协同为设计理念,已成为兴海物联打造智慧校园不可或缺的一部分。
+
+- 容器镜像一次构建,随处运行,有效降低新建园区部署运维复杂度。
+- 边云协同使数据在边缘处理,确保实时性和安全性,并降低网络带宽成本。
+- KubeEdge 可以轻松添加硬件,并支持常见协议。无需二次开发。
+
+## 优势
+
+兴海物联基于KubeEdge和自有兴海物联云平台,构建了云边端协同的智慧校园,大幅提升了校园管理效率。在AI的助力下,近30%的重复性工作实现了自动化。未来,兴海物联还将继续与KubeEdge合作,推出基于KubeEdge的智慧校园解决方案。
\ No newline at end of file
diff --git a/src/pages/case-studies/XingHai/index.mdx b/src/pages/case-studies/XingHai/index.mdx
new file mode 100644
index 0000000000..28955d8761
--- /dev/null
+++ b/src/pages/case-studies/XingHai/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-27
+title: XingHai IoT
+subTitle:
+description: Xinghai IoT uses KubeEdge to build a smart campus with cloud-edge-device collaboration, which greatly improves campus management efficiency.
+tags:
+ - UserCase
+---
+
+# Building smart campuses based on KubeEdge
+
+## Challenge
+
+Xinghai IoT is an IoT company that provides comprehensive smart building solutions by leveraging a construction IoT platform, intelligent hardware, and AI. It is a creator and practitioner of smart campus standards for China Overseas Property Management and a core full-chain service provider of smart campus solutions from Huawei.
+
+The company serves its customers in 80 major cities in China and around the world. It has delivered 741 projects, covering more than 156 million square meters. Its business covers a diverse range of building types, such as high-end residential buildings, commercial complexes, super office buildings, government properties, and industrial parks.
+
+In recent years, as its business expands and occupant demands for service quality grow, Xinghai IoT has been committed to using edge computing and IoT to build sustainable smart campuses, improving efficiency for campus operations and management.
+
+## Highlights
+
+Xinghai IoT now offers services in a wide range of areas. Therefore, its solutions should be portable and replicable and need to ensure real-time data processing and secure data storage. KubeEdge, with services designed for cloud native development and edge-cloud synergy, has become an indispensable part of Xinghai IoT for building smart campuses.
+
+- Container images are built once to run anywhere, effectively reducing the deployment and O&M complexity of new campuses.
+- Edge-cloud synergy enables data to be processed at the edge, ensuring real-time performance and security and lowering network bandwidth costs.
+- KubeEdge makes adding hardware easy and supports common protocols. No secondary development is needed.
+
+## Benefits
+
+Xinghai IoT built a smart campus with cloud-edge-device synergy based on KubeEdge and its own Xinghai IoT cloud platform, greatly improving the efficiency of campus management. With AI assistance, nearly 30% of the repetitive work is automated. In the future, Xinghai IoT will continue to collaborate with KubeEdge to launch KubeEdge-based smart campus solutions.
\ No newline at end of file
From a5f98d9adf34a249c39dd335b753d101e9fde684 Mon Sep 17 00:00:00 2001
From: Shubham Singh
Date: Sat, 18 May 2024 00:14:47 +0530
Subject: [PATCH 19/20] Part 1: improving the install with keadm docs
Signed-off-by: Shubham Singh
Signed-off-by: hyp4293 <429302517@qq.com>
---
docs/setup/install-with-keadm.md | 35 +++++++++++++++++---------------
1 file changed, 19 insertions(+), 16 deletions(-)
diff --git a/docs/setup/install-with-keadm.md b/docs/setup/install-with-keadm.md
index 8b585bf53a..adbb4c9286 100644
--- a/docs/setup/install-with-keadm.md
+++ b/docs/setup/install-with-keadm.md
@@ -2,47 +2,50 @@
title: Installing KubeEdge with Keadm
sidebar_position: 3
---
-Keadm is used to install the cloud and edge components of KubeEdge. It is not responsible for installing K8s and runtime.
-Please refer [kubernetes-compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility) to get **Kubernetes compatibility** and determine what version of Kubernetes would be installed.
+Keadm is used to install the cloud and edge components of KubeEdge. It does not handle the installation of Kubernetes and its runtime environment.
-## Limitation
+Please refer to [Kubernetes compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility) documentation to check **Kubernetes compatibility** and ascertain the Kubernetes version to be installed.
-- Need super user rights (or root rights) to run.
+## Limitation
+- It requires super user (root) rights to run.
## Install keadm
-There're three ways to download a `keadm` binary
+There are three ways to download the `keadm` binary:
-- Download from [github release](https://github.com/kubeedge/kubeedge/releases).
+1. Download from [GitHub release](https://github.com/kubeedge/kubeedge/releases).
- Now KubeEdge github officially holds three arch releases: amd64, arm, arm64. Please download the right arch package according to your platform, with your expected version.
+ KubeEdge GitHub officially holds three architecture releases: amd64, arm, and arm64. Please download the correct package according to your platform and desired version.
+
```shell
wget https://github.com/kubeedge/kubeedge/releases/download/v1.12.1/keadm-v1.12.1-linux-amd64.tar.gz
tar -zxvf keadm-v1.12.1-linux-amd64.tar.gz
cp keadm-v1.12.1-linux-amd64/keadm/keadm /usr/local/bin/keadm
```
-- Download from dockerhub KubeEdge official release image.
+
+2. Download from the official KubeEdge release image on Docker Hub.
```shell
docker run --rm kubeedge/installation-package:v1.12.1 cat /usr/local/bin/keadm > /usr/local/bin/keadm && chmod +x /usr/local/bin/keadm
```
-- Build from source
+3. Build from source
- ref: [build from source](./install-with-binary#build-from-source)
-
+- Refer to [build from source](./install-with-binary#build-from-source) for instructions.
## Setup Cloud Side (KubeEdge Master Node)
-By default ports `10000` and `10002` in your cloudcore needs to be accessible for your edge nodes.
+By default, ports `10000` and `10002` on your cloudcore needs to be accessible for your edge nodes.
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
-1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other info of the k8s cluster.
-2. Please make sure edge node can connect cloud node using local IP of cloud node, or you need to specify public IP of cloud node with `--advertise-address` flag.
-3. `--advertise-address` is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP.
+1. At least one of `kubeconfig` or `master` must be configured correctly to verify the version and other information of the Kubernetes cluster.
+
+2. Ensure the edge node can connect to the cloud node using the local IP of cloud node, or specify the public IP of the cloud node with the `--advertise-address` flag.
+
+3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate), the default value is the local IP.
### keadm init
From 628e9d7237348955cc034fd972c0a989b2b2a86c Mon Sep 17 00:00:00 2001
From: hyp4293 <429302517@qq.com>
Date: Wed, 10 Jul 2024 23:12:40 +0800
Subject: [PATCH 20/20] fix merge
---
docs/setup/install-with-keadm.md | 148 +++++++++++++++++++------------
1 file changed, 93 insertions(+), 55 deletions(-)
diff --git a/docs/setup/install-with-keadm.md b/docs/setup/install-with-keadm.md
index adbb4c9286..3967edbc81 100644
--- a/docs/setup/install-with-keadm.md
+++ b/docs/setup/install-with-keadm.md
@@ -37,7 +37,7 @@ There're three ways to download the `keadm` binary:
## Setup Cloud Side (KubeEdge Master Node)
-By default, ports `10000` and `10002` on your cloudcore needs to be accessible for your edge nodes.
+By default, ports `10000` and `10002` on your CloudCore need to be accessible for your edge nodes.
**IMPORTANT NOTES:**
@@ -45,11 +45,11 @@ By default, ports `10000` and `10002` on your cloudcore needs to be accessible f
2. Ensure the edge node can connect to the cloud node using the local IP of cloud node, or specify the public IP of the cloud node with the `--advertise-address` flag.
-3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate), the default value is the local IP.
+3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate). The default value is the local IP.
### keadm init
-`keadm init` provides a solution for integrating Cloudcore helm chart. Cloudcore will be deployed to cloud nodes in container mode.
+`keadm init` provides a solution for integrating the Cloudcore Helm chart. Cloudcore will be deployed to cloud nodes in container mode.
Example:
@@ -58,6 +58,7 @@ keadm init --advertise-address="THE-EXPOSED-IP" --profile version=v1.12.1 --kube
```
Output:
+
```shell
Kubernetes version verification passed, KubeEdge installation will start...
CLOUDCORE started
@@ -69,7 +70,8 @@ STATUS: deployed
REVISION: 1
```
-You can run `kubectl get all -n kubeedge` to ensure that cloudcore start successfully just like below.
+You can run `kubectl get all -n kubeedge` to ensure that CloudCore started successfully, as shown below.
+
```shell
# kubectl get all -n kubeedge
NAME READY STATUS RESTARTS AGE
@@ -85,11 +87,13 @@ NAME DESIRED CURRENT READY AGE
replicaset.apps/cloudcore-56b8454784 1 1 1 46s
```
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
1. For the `--set key=value` flags available for the cloudcore Helm chart, refer to [KubeEdge Cloudcore Helm Charts README.md](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/README.md).
+
2. You can start with one of Keadm’s built-in configuration profiles and then further customize the configuration for your specific needs. Currently, the built-in configuration profile keyword is `version`. Refer to [version.yaml](https://github.com/kubeedge/kubeedge/blob/master/manifests/profiles/version.yaml) as `values.yaml`, you can make your custom values file here, and add flags like `--profile version=v1.9.0 --set key=value` to use this profile. `--external-helm-root` flag provides a feature function to install the external helm charts like edgemesh.
-3. `keadm init` deploy cloudcore in container mode, if you want to deploy cloudcore as binary, please ref [`keadm deprecated init`](#keadm-deprecated-init) below.
+
+3. By default, `keadm init` deploys CloudCore in container mode. If you want to deploy CloudCore as a binary, please refer to [`keadm deprecated init`](#keadm-deprecated-init).
Example:
@@ -97,7 +101,7 @@ Example:
keadm init --set server.advertiseAddress="THE-EXPOSED-IP" --set server.nodeName=allinone --kube-config=/root/.kube/config --force --external-helm-root=/root/go/src/github.com/edgemesh/build/helm --profile=edgemesh
```
-If you are familiar with the helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
+If you are familiar with the Helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
**SPECIAL SCENARIO:**
@@ -112,24 +116,27 @@ To handle kube-proxy, you can refer to the [two methods](#anchor-name) mentioned
### keadm manifest generate
-You can also get the manifests with `keadm manifest generate`.
+You can generate the manifests using `keadm manifest generate`.
Example:
```shell
keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root/.kube/config > kubeedge-cloudcore.yaml
```
-> Add --skip-crds flag to skip outputing the CRDs
+
+> Add `--skip-crds` flag to skip outputting the CRDs.
### keadm deprecated init
-`keadm deprecated init` will install cloudcore in binary process, generate the certs and install the CRDs. It also provides a flag by which a specific version can be set.
+`keadm deprecated init` installs CloudCore as a binary process, generates certificates, and installs the CRDs. It also provides a flag to set a specific version.
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
-1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other info of the k8s cluster.
-2. Please make sure edge node can connect cloud node using local IP of cloud node, or you need to specify public IP of cloud node with `--advertise-address` flag.
-3. `--advertise-address` is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP.
+1. At least one of `kubeconfig` or `master` must be configured correctly to verify the version and other information of the Kubernetes cluster.
+
+2. Ensure the edge node can connect to the cloud node using the local IP of cloud node, or specify the public IP of the cloud node with the `--advertise-address` flag.
+
+3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate). The default value is the local IP.
Example:
```shell
@@ -144,7 +151,8 @@ keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root
CloudCore started
```
- You can run `ps -elf | grep cloudcore` command to ensure that cloudcore is running successfully.
+ You can run the `ps -elf | grep cloudcore` command to ensure that Cloudcore is running successfully.
+
```shell
# ps -elf | grep cloudcore
0 S root 2736434 1 1 80 0 - 336281 futex_ 11:02 pts/2 00:00:00 /usr/local/bin/cloudcore
@@ -155,7 +163,7 @@ keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root
### Get Token From Cloud Side
-Run `keadm gettoken` in **cloud side** will return the token, which will be used when joining edge nodes.
+Run `keadm gettoken` on the **cloud side** to retrieve the token, which will be used when joining edge nodes.
```shell
# keadm gettoken
@@ -165,7 +173,8 @@ Run `keadm gettoken` in **cloud side** will return the token, which will be used
### Join Edge Node
#### keadm join
-`keadm join` will install edgecore. It also provides a flag by which a specific version can be set. It will pull image [kubeedge/installation-package](https://hub.docker.com/r/kubeedge/installation-package) from dockerhub and copy binary `edgecore` from container to hostpath, and then start `edgecore` as a system service.
+
+`keadm join` installs Edgecore. It also provides a flag to set a specific version. It pulls the [kubeedge/installation-package](https://hub.docker.com/r/kubeedge/installation-package) image from Docker Hub, copies the `edgecore` binary from the container to the host path, and then starts `edgecore` as a system service.
Example:
@@ -173,10 +182,13 @@ Example:
keadm join --cloudcore-ipport="THE-EXPOSED-IP":10000 --token=27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE --kubeedge-version=v1.12.1
```
-**IMPORTANT NOTE:**
-1. `--cloudcore-ipport` flag is a mandatory flag.
-2. If you want to apply certificate for edge node automatically, `--token` is needed.
-3. The kubeEdge version used in cloud and edge side should be same.
+**IMPORTANT NOTES:**
+
+1. The `--cloudcore-ipport` flag is mandatory.
+
+2. If you want to apply for a certificate for the edge node automatically, the `--token` flag is needed.
+
+3. The KubeEdge version used on the cloud and edge sides should be the same.
Output:
@@ -185,7 +197,8 @@ Output:
KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
```
-you can run `systemctl status edgecore` command to ensure edgecore is running successfully
+You can run the `systemctl status edgecore` command to ensure Edgecore is running successfully:
+
```shell
# systemctl status edgecore
● edgecore.service
@@ -198,14 +211,17 @@ you can run `systemctl status edgecore` command to ensure edgecore is running su
```
#### keadm deprecated join
-You can also use `keadm deprecated join` to start edgecore from release pacakge. It will download release packages from [KubeEdge release website](https://github.com/kubeedge/kubeedge/releases), and then start `edgecore` in binary progress.
+
+You can also use `keadm deprecated join` to start Edgecore from the release package. It will download release packages from the [KubeEdge release website](https://github.com/kubeedge/kubeedge/releases), and then start `edgecore` as a binary process.
Example:
+
```shell
keadm deprecated join --cloudcore-ipport="THE-EXPOSED-IP":10000 --token=27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE --kubeedge-version=1.12.0
```
Output:
+
```shell
MQTT is installed in this host
...
@@ -213,59 +229,63 @@ KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
```
### Deploy demo on edge nodes
-ref: [Deploy demo on edge nodes](./install-with-binary#deploy-demo-on-edge-nodes)
+
+Refer to the [Deploy demo on edge nodes](./install-with-binary#deploy-demo-on-edge-nodes) documentation.
### Enable `kubectl logs` Feature
-Before deploying metrics-server , `kubectl logs` feature must be activated:
+Before deploying the metrics-server, the `kubectl logs` feature must be activated:
-> Note that if cloudcore is deployed using helm:
-> - The stream certs are generated automatically and cloudStream feature is enabled by default. So, step 1-3 could
- be skipped unless customization is needed.
-> - Also, step 4 could be finished by iptablesmanager component by default, manually operations are not needed.
- Refer to the [cloudcore helm values](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/values.yaml#L67).
-> - Operations in step 5-6 related to cloudcore could also be skipped.
+> Note for Helm deployments:
+> - Stream certificates are generated automatically and the CloudStream feature is enabled by default. Therefore, Steps 1-3 can be skipped unless customization is needed.
+> - Step 4 is handled by the iptablesmanager component by default, so manual operations are not needed. Refer to the [cloudcore helm values](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/values.yaml#L67).
+> - Operations in Steps 5-6 related to Cloudcore can also be skipped.
-1. Make sure you can find the kubernetes `ca.crt` and `ca.key` files. If you set up your kubernetes cluster by `kubeadm` , those files will be in `/etc/kubernetes/pki/` dir.
+1. Ensure you can locate the Kubernetes `ca.crt` and `ca.key` files. If you set up your Kubernetes cluster with `kubeadm`, these files will be in the `/etc/kubernetes/pki/` directory.
``` shell
ls /etc/kubernetes/pki/
```
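   A quick way to confirm both files are present (a minimal sketch; the paths are the `kubeadm` defaults mentioned above):

   ```shell
   # Report whether the kubeadm-default CA files exist
   for f in /etc/kubernetes/pki/ca.crt /etc/kubernetes/pki/ca.key; do
     [ -f "$f" ] && echo "found $f" || echo "missing $f"
   done
   ```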
-2. Set `CLOUDCOREIPS` env. The environment variable is set to specify the IP address of cloudcore, or a VIP if you have a highly available cluster.
- Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with cloudcore.
+2. Set the `CLOUDCOREIPS` environment variable to specify the IP address of Cloudcore, or a VIP if you have a highly available cluster. Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with Cloudcore.
```bash
export CLOUDCOREIPS="192.168.0.139"
```
- (Warning: the same **terminal** is essential to continue the work, or it is necessary to type this command again.) Checking the environment variable with the following command:
+
+   (Warning: you must continue the work in the same **terminal**; otherwise, you will need to run this command again.) You can check the environment variable with the following command:
+
``` shell
echo $CLOUDCOREIPS
```
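   Since the variable only lives in the current shell, one way to make it survive new terminals is to append the export to your shell profile. A sketch, assuming a bash login shell (`~/.bashrc` and the IP are examples; adjust to your environment):

   ```shell
   # Persist CLOUDCOREIPS so a new terminal session still has it
   # (example IP; replace with your cloudcore address)
   export CLOUDCOREIPS="192.168.0.139"
   echo "export CLOUDCOREIPS=\"$CLOUDCOREIPS\"" >> ~/.bashrc
   ```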
-3. Generate the certificates for **CloudStream** on cloud node, however, the generation file is not in the `/etc/kubeedge/`, we need to copy it from the repository which was git cloned from GitHub.
- Change user to root:
+3. Generate the certificates for **CloudStream** on the cloud node. The generation file is not in `/etc/kubeedge/`, so it needs to be copied from the repository cloned from GitHub. Switch to the root user:
+
```shell
sudo su
```
- Copy certificates generation file from original cloned repository:
+
+ Copy the certificate generation file from the original cloned repository:
+
```shell
cp $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh /etc/kubeedge/
```
+
Change directory to the kubeedge directory:
+
```shell
cd /etc/kubeedge/
```
+
   Generate the certificates with **certgen.sh**:
```bash
/etc/kubeedge/certgen.sh stream
```
-4. It is needed to set iptables on the host. (This command should be executed on every apiserver deployed node.)(In this case, this the master node, and execute this command by root.)
- Run the following command on the host on which each apiserver runs:
+4. Set iptables on the host. (This command should be executed on every node where an apiserver is deployed; in this case, the master node, and run it as root.) Run the following command on the host where each apiserver runs:
- **Note:** You need to get the configmap first, which contains all the cloudcore ips and tunnel ports.
+ **Note:** First, get the configmap containing all the Cloudcore IPs and tunnel ports:
```bash
kubectl get cm tunnelport -nkubeedge -oyaml
@@ -279,7 +299,7 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
...
```
- Then set all the iptables for multi cloudcore instances to every node that apiserver runs. The cloudcore ips and tunnel ports should be get from configmap above.
+   Then set all the iptables rules for multiple Cloudcore instances on every node where an apiserver runs. The Cloudcore IPs and tunnel ports should be obtained from the configmap above.
```bash
iptables -t nat -A OUTPUT -p tcp --dport $YOUR-TUNNEL-PORT -j DNAT --to $YOUR-CLOUDCORE-IP:10003
@@ -287,22 +307,24 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
iptables -t nat -A OUTPUT -p tcp --dport 10351 -j DNAT --to 192.168.1.17:10003
```
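   Because the rules follow a fixed pattern, they can be generated from the ip:port map in the configmap. A small sketch that prints, rather than executes, the commands (the map value below is the sample from above, and the parsing assumes that exact simple JSON layout):

   ```shell
   # Sample ip:port map as found in the tunnelport configmap;
   # replace with the value from your own cluster
   TUNNEL_MAP='{"192.168.1.16":10350,"192.168.1.17":10351}'

   # Print one DNAT rule per cloudcore instance (review before running them)
   echo "$TUNNEL_MAP" | tr -d '{}"' | tr ',' '\n' | while IFS=: read -r ip port; do
     echo "iptables -t nat -A OUTPUT -p tcp --dport $port -j DNAT --to $ip:10003"
   done
   ```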
- If you are not sure if you have setting of iptables, and you want to clean all of them.
- (If you set up iptables wrongly, it will block you out of your `kubectl logs` feature)
+   If you are unsure about the current iptables settings and want to clean all of them, note that wrongly configured iptables rules will block the `kubectl logs` feature.
+
The following command can be used to clean up iptables:
+
``` shell
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
```
-
5. Modify **both** `/etc/kubeedge/config/cloudcore.yaml` and `/etc/kubeedge/config/edgecore.yaml` on cloudcore and edgecore. Set up **cloudStream** and **edgeStream** to `enable: true`. Change the server IP to the cloudcore IP (the same as $CLOUDCOREIPS).
Open the YAML file in cloudcore:
+
```shell
sudo nano /etc/kubeedge/config/cloudcore.yaml
```
Modify the file in the following part (`enable: true`):
+
```yaml
cloudStream:
enable: true
@@ -317,10 +339,13 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
```
Open the YAML file in edgecore:
+
``` shell
sudo nano /etc/kubeedge/config/edgecore.yaml
```
+
Modify the file in the following parts (`enable: true` and `server: 192.168.0.193:10004`):
+
``` yaml
edgeStream:
enable: true
@@ -338,24 +363,32 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
``` shell
sudo su
```
- cloudCore in process mode:
+
+ If CloudCore is running in process mode:
+
``` shell
pkill cloudcore
nohup cloudcore > cloudcore.log 2>&1 &
```
- or cloudCore in kubernetes deployment mode:
+
+ If CloudCore is running in Kubernetes deployment mode:
+
``` shell
kubectl -n kubeedge rollout restart deployment cloudcore
```
- edgeCore:
+
+ EdgeCore:
+
``` shell
systemctl restart edgecore.service
```
- If you fail to restart edgecore, check if that is because of `kube-proxy` and kill it. **kubeedge** reject it by default, we use a succedaneum called [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md)
**Note:** the importance is to avoid `kube-proxy` being deployed on edgenode. There are two methods to solve it:
- 1. Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
+   **Note:** It is important to avoid `kube-proxy` being deployed on the edge node. There are two methods to achieve this:
+
+ - **Method 1:** Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
+
``` yaml
spec:
template:
@@ -368,24 +401,26 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
- key: node-role.kubernetes.io/edge
operator: DoesNotExist
```
- or just run the below command directly in the shell window:
+
+ or just run the following command directly in the shell window:
+
```shell
kubectl patch daemonset kube-proxy -n kube-system -p '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "node-role.kubernetes.io/edge", "operator": "DoesNotExist"}]}]}}}}}}}'
```
- 2. If you still want to run `kube-proxy`, ask **edgecore** not to check the environment by adding the env variable in `edgecore.service` :
+   - **Method 2:** If you still want to run `kube-proxy`, instruct **edgecore** not to check the environment by adding the environment variable in `edgecore.service`:
``` shell
sudo vi /etc/kubeedge/edgecore.service
```
- - Add the following line into the **edgecore.service** file:
+ Add the following line into the **edgecore.service** file:
``` shell
Environment="CHECK_EDGECORE_ENVIRONMENT=false"
```
- - The final file should look like this:
+ The final file should look like this:
```
Description=edgecore.service
@@ -400,6 +435,7 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
```
### Support Metrics-server in Cloud
+
1. This feature reuses the cloudstream and edgestream modules, so you also need to perform all the steps of *Enable `kubectl logs` Feature*.
2. Since the kubelet ports of edge nodes and cloud nodes are not the same, the current release version of metrics-server (0.3.x) does not support automatic port identification (a 0.4.0 feature), so for now you need to manually compile the image from the master branch yourself.
@@ -471,7 +507,8 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
- charlie-latest
```
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
+
1. Metrics-server needs to use the hostNetwork mode.
2. Use the image you compiled yourself and set `imagePullPolicy` to `Never`.
@@ -520,4 +557,5 @@ It provides a flag for users to specify kubeconfig path, the default path is `/r
```
### Node
+
`keadm reset` or `keadm deprecated reset` will stop `edgecore`; it does not uninstall or remove any of the prerequisites.