diff --git a/.github/PULL_REQUEST_TEMPLATE.md b/.github/PULL_REQUEST_TEMPLATE.md
index 4aa196e093..3010f077c1 100644
--- a/.github/PULL_REQUEST_TEMPLATE.md
+++ b/.github/PULL_REQUEST_TEMPLATE.md
@@ -1,8 +1,16 @@
 * **Please check if the PR fulfills these requirements**
+
+- [ ] The commit message follows our guidelines
 - [ ] Tests for the changes have been added (for bug fixes / features)
 - [ ] Docs have been added / updated (for bug fixes / features)
 
+**Which issue(s) this PR fixes**:
+
+Fixes #
 
 * **What kind of change does this PR introduce?** (Bug fix, feature, docs update, ...)
diff --git a/blog/authors.yml b/blog/authors.yml
index 070c2ddb95..1b48f26b3e 100644
--- a/blog/authors.yml
+++ b/blog/authors.yml
@@ -52,4 +52,9 @@ Trilok Geer:
 KubeEdge SIG Release:
   name: KubeEdge SIG Release
   url: https://github.com/kubeedge-bot
-  image_url: https://avatars.githubusercontent.com/u/48982446?v=4
\ No newline at end of file
+  image_url: https://avatars.githubusercontent.com/u/48982446?v=4
+
+Tomoya Fujita:
+  name: Tomoya Fujita
+  url: https://github.com/fujitatomoya
+  image_url: https://avatars.githubusercontent.com/u/43395114?v=4
\ No newline at end of file
diff --git a/blog/enable-cilium/images/overview.png b/blog/enable-cilium/images/overview.png
new file mode 100644
index 0000000000..0a69418bba
Binary files /dev/null and b/blog/enable-cilium/images/overview.png differ
diff --git a/blog/enable-cilium/index.mdx b/blog/enable-cilium/index.mdx
new file mode 100644
index 0000000000..f9eba30bb5
--- /dev/null
+++ b/blog/enable-cilium/index.mdx
@@ -0,0 +1,483 @@
+---
+authors:
+- Tomoya Fujita
+categories:
+- General
+- Announcements
+date: 2024-06-04
+draft: false
+lastmod: 2024-06-04
+summary: KubeEdge meets Cilium !!!
+tags:
+- KubeEdge
+- kubeedge
+- edge computing
+- kubernetes edge computing
+- K8s edge orchestration
+- edge computing platform
+- cloud native
+- iot
+- iiot
+- Cilium
+- CNI
+title: KubeEdge meets Cilium !!!
+---
+
+This blog introduces how to enable the [Cilium](https://github.com/cilium/cilium) Container Network Interface with KubeEdge.
+
+## Why [Cilium](https://github.com/cilium/cilium) for KubeEdge
+
+[Cilium](https://github.com/cilium/cilium) is one of the most advanced and efficient container network interface (CNI) plugins for Kubernetes, providing network connectivity and security for containerized applications in Kubernetes clusters.
+It leverages [eBPF (extended Berkeley Packet Filter)](https://ebpf.io/) technology to implement networking and security policies at the Linux kernel level, allowing for high-performance data plane operations and fine-grained security controls.
+
+KubeEdge, in turn, extends the cluster orchestration capability down to edge environments to provide unified cluster management and sophisticated edge-specific features.
+
+Enabling [Cilium](https://github.com/cilium/cilium) with KubeEdge allows us to take advantage of both benefits even in edge computing environments.
+We can deploy application containers where `EdgeCore` is running and bind [Cilium](https://github.com/cilium/cilium) to connect them with workloads in the cloud infrastructure.
+This is possible because [Cilium](https://github.com/cilium/cilium) can also enable a [WireGuard](https://docs.cilium.io/en/latest/security/network/encryption-wireguard/) VPN with transparent encryption of traffic between Cilium-managed endpoints.
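+
+As a quick sanity check once encryption is enabled (a sketch, assuming the WireGuard setup shown later in this post), Cilium creates a `cilium_wg0` WireGuard interface on every node it manages, which can be inspected with standard Linux tooling:
+
+```
+### The cilium_wg0 interface should exist on every Cilium-managed node
+> ip link show cilium_wg0
+
+### If wireguard-tools is installed, peers and traffic counters can be checked as well
+> wg show cilium_wg0
+```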
+
+Furthermore, we can also rely on [Cilium Tetragon Security Observability and Runtime Enforcement](https://github.com/cilium/tetragon) to confine security risks and vulnerabilities in edge environments.
+
+
+
+## How to enable [Cilium](https://github.com/cilium/cilium) with KubeEdge
+
+The following procedures set up a simple cluster system with Kubernetes and KubeEdge with [Cilium](https://github.com/cilium/cilium).
+Since this is a new approach and still in the **beta** phase, the following manual operations are required.
+
+After all the operations, we end up with the following cluster configuration with KubeEdge and [Cilium](https://github.com/cilium/cilium).
+
+![overview](./images/overview.png)
+
+
+
+- [Prerequisites](#prerequisites)
+- [Kubernetes Master Setup](#kubernetes-master-setup)
+- [Cilium Install and Setup](#cilium-install-and-setup)
+- [KubeEdge CloudCore Setup](#kubeedge-cloudcore-setup)
+- [KubeEdge EdgeCore Setup](#kubeedge-edgecore-setup)
+- [Check Cilium Connectivity from Pods](#check-cilium-connectivity-from-pods)
+
+
+
+### Prerequisites
+
+- [KubeEdge Release v1.16](https://github.com/kubeedge/kubeedge/blob/master/CHANGELOG/CHANGELOG-1.16.md) or later is required.
+
+  To enable Cilium with KubeEdge, we must use [KubeEdge Release v1.16](https://github.com/kubeedge/kubeedge/blob/master/CHANGELOG/CHANGELOG-1.16.md) or later.
+  This is because `cilium-agent` needs to issue `InClusterConfig` API requests to the Kubernetes API server to configure itself.
+  This is no problem on Kubernetes nodes, but with KubeEdge those API requests and responses need to be bypassed via the [KubeEdge MetaManager](https://kubeedge.io/docs/architecture/edge/metamanager/).
+  See [KubeEdge EdgeCore supports Cilium CNI](https://github.com/kubeedge/kubeedge/issues/4844) for more details.
+
+- A Kubernetes version compatible with [KubeEdge Release v1.16](https://github.com/kubeedge/kubeedge/blob/master/CHANGELOG/CHANGELOG-1.16.md).
+
+  You can find the compatible and supported Kubernetes versions [here](https://github.com/kubeedge/kubeedge?tab=readme-ov-file#kubernetes-compatibility).
+
+- Super user rights (or root rights) are required to run the commands.
+
+
+
+### Kubernetes Master Setup
+
+Refer to [KubeEdge Setup Prerequisites](https://kubeedge.io/docs/category/prerequisites) and set up the Kubernetes API server as follows.
+
+```
+### Check node status
+> kubectl get nodes -o wide
+NAME           STATUS   ROLES           AGE   VERSION    INTERNAL-IP     EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION       CONTAINER-RUNTIME
+tomoyafujita   Ready    control-plane   25s   v1.26.15   AA.BBB.CCC.DD   <none>        Ubuntu 20.04.6 LTS   5.15.0-102-generic   containerd://1.6.32
+
+### Remove the taint so that CloudCore can be deployed on the control-plane
+> kubectl taint node tomoyafujita node-role.kubernetes.io/control-plane:NoSchedule-
+node/tomoyafujita untainted
+> kubectl get nodes -o json | jq '.items[].spec.taints'
+null
+```
+
+
+
+### Cilium Install and Setup
+
+Refer to [Cilium Quick Installation](https://docs.cilium.io/en/stable/gettingstarted/k8s-install-default/) to install and set up the Cilium deployments in the cluster.
+
+```
+> cilium version
+cilium-cli: v0.16.9 compiled with go1.22.3 on linux/amd64
+cilium image (default): v1.15.5
+cilium image (stable): v1.15.5
+cilium image (running): unknown. Unable to obtain cilium version. Reason: release: not found
+```
+
+Then install Cilium with the WireGuard VPN enabled in the cluster:
+
+```
+> cilium install --set encryption.enabled=true --set encryption.type=wireguard --set encryption.wireguard.persistentKeepalive=60
+...
+
+> cilium status
+    /¯¯\
+ /¯¯\__/¯¯\    Cilium:             OK
+ \__/¯¯\__/    Operator:           OK
+ /¯¯\__/¯¯\    Envoy DaemonSet:    disabled (using embedded mode)
+ \__/¯¯\__/    Hubble Relay:       disabled
+    \__/       ClusterMesh:        disabled
+
+Deployment             cilium-operator    Desired: 1, Ready: 1/1, Available: 1/1
+DaemonSet              cilium             Desired: 1, Ready: 1/1, Available: 1/1
+Containers:            cilium             Running: 1
+                       cilium-operator    Running: 1
+Cluster Pods:          1/2 managed by Cilium
+Helm chart version:
+Image versions         cilium             quay.io/cilium/cilium:v1.15.5@sha256:4ce1666a73815101ec9a4d360af6c5b7f1193ab00d89b7124f8505dee147ca40: 1
+                       cilium-operator    quay.io/cilium/operator-generic:v1.15.5@sha256:f5d3d19754074ca052be6aac5d1ffb1de1eb5f2d947222b5f10f6d97ad4383e8: 1
+```
+
+Add `nodeAffinity` to the Cilium `DaemonSet` to make sure these pods are only created on cloud nodes.
+The pods of this generic Cilium `DaemonSet` are supposed to run on cloud nodes, not on the nodes where `EdgeCore` is running.
+
+```
+### Edit Cilium DaemonSet with the following patch
+> kubectl edit ds -n kube-system cilium
+```
+
+```patch
+diff --git a/cilium-kubelet.yaml b/cilium-kubelet.yaml
+index 21881e1..9946be9 100644
+--- a/cilium-kubelet.yaml
++++ b/cilium-kubelet.yaml
+@@ -29,6 +29,12 @@ spec:
+         k8s-app: cilium
+     spec:
+       affinity:
++        nodeAffinity:
++          requiredDuringSchedulingIgnoredDuringExecution:
++            nodeSelectorTerms:
++            - matchExpressions:
++              - key: node-role.kubernetes.io/edge
++                operator: DoesNotExist
+         podAntiAffinity:
+           requiredDuringSchedulingIgnoredDuringExecution:
+           - labelSelector:
+```
+
+After editing, the Cilium pods will be restarted.
+
+
+
+### KubeEdge CloudCore Setup
+
+First of all, we need to install `Keadm` following the official procedure in [Installing KubeEdge with Keadm](https://kubeedge.io/docs/setup/install-with-keadm).
+
+In this blog, we use `Keadm v1.16.1` as follows.
+
+```bash
+### Install v1.16.1 keadm command
+> wget https://github.com/kubeedge/kubeedge/releases/download/v1.16.1/keadm-v1.16.1-linux-amd64.tar.gz
+> tar -zxvf keadm-v1.16.1-linux-amd64.tar.gz
+> cp keadm-v1.16.1-linux-amd64/keadm/keadm /usr/local/bin
+
+> keadm version
+version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.1", GitCommit:"bd7b42acbfbe3a453c7bb75a6bb8f1e8b3db7415", GitTreeState:"clean", BuildDate:"2024-03-27T02:57:08Z", GoVersion:"go1.20.10", Compiler:"gc", Platform:"linux/amd64"}
+```
+
+Then start `CloudCore` with `v1.16.1`:
+
+```
+> keadm init --advertise-address="AA.BBB.CCC.DD" --profile version=v1.16.1 --kube-config=/root/.kube/config
+Kubernetes version verification passed, KubeEdge installation will start...
+CLOUDCORE started
+=========CHART DETAILS=======
+NAME: cloudcore
+LAST DEPLOYED: Tue Jun 4 17:19:15 2024
+NAMESPACE: kubeedge
+STATUS: deployed
+REVISION: 1
+```
+
+After `CloudCore` is started, we also need to enable the `dynamicController` module.
+
+```
+### edit ConfigMap of CloudCore to enable dynamicController
+> kubectl edit cm -n kubeedge cloudcore
+> kubectl delete pod -n kubeedge --selector=kubeedge=cloudcore
+
+### Check ConfigMap
+> kubectl get cm -n kubeedge cloudcore -o yaml | grep "dynamicController" -A 1
+      dynamicController:
+        enable: true
+```
+
+To manage and handle the API requests from `MetaManager` that originally come from Cilium running on the edge nodes, we need to grant access permissions to `CloudCore` by editing the `clusterRole` and `clusterRolebinding`.
+
+`clusterRole`:
+
+```
+### Edit and apply the following patch
+> kubectl edit clusterrole cilium
+```
+
+```patch
+diff --git a/cilium-clusterrole.yaml b/cilium-clusterrole.yaml
+index 736e35c..fd5512e 100644
+--- a/cilium-clusterrole.yaml
++++ b/cilium-clusterrole.yaml
+@@ -66,6 +66,7 @@ rules:
+   verbs:
+   - list
+   - watch
++  - get
+ - apiGroups:
+   - cilium.io
+   resources:
+```
+
+`clusterRolebinding`:
+
+```
+### Edit and apply the following patch
+> kubectl edit clusterrolebinding cilium
+```
+
+```patch
+diff --git a/cilium-clusterrolebinding.yaml b/cilium-clusterrolebinding.yaml
+index 9676737..ac956de 100644
+--- a/cilium-clusterrolebinding.yaml
++++ b/cilium-clusterrolebinding.yaml
+@@ -12,3 +12,9 @@ subjects:
+ - kind: ServiceAccount
+   name: cilium
+   namespace: kube-system
++- kind: ServiceAccount
++  name: cloudcore
++  namespace: kubeedge
++- kind: ServiceAccount
++  name: cloudcore
++  namespace: default
+```
+
+Finally, we get the token after `CloudCore` has restarted.
+
+```
+> keadm gettoken
+
+```
+
+
+
+### KubeEdge EdgeCore Setup
+
+With the token provided above, we can start `EdgeCore` to join the cluster system.
+
+```
+> keadm join --cloudcore-ipport=AA.BBB.CCC.DD:10000 --kubeedge-version=v1.16.1 --cgroupdriver=systemd --token
+...
+I0604 21:36:31.040859 2118064 join_others.go:265] KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
+I0604 21:36:41.050154 2118064 join.go:94] 9. Install Complete!
+
+> systemctl status edgecore
+● edgecore.service
+     Loaded: loaded (/etc/systemd/system/edgecore.service; enabled; vendor preset: enabled)
+     Active: active (running) since Tue 2024-06-04 21:36:31 PDT; 40s ago
+   Main PID: 2118341 (edgecore)
+      Tasks: 24 (limit: 18670)
+     Memory: 31.8M
+        CPU: 849ms
+     CGroup: /system.slice/edgecore.service
+             └─2118341 /usr/local/bin/edgecore
+```
+
+After `EdgeCore` is started, we need to enable `ServiceBus` and `MetaServer` by editing `edgecore.yaml`.
+
+
+```
+### Edit and apply the following patch
+> vi /etc/kubeedge/config/edgecore.yaml
+
+### Restart edgecore systemd-service
+> systemctl restart edgecore
+```
+
+```patch
+diff --git a/edgecore.yaml b/edgecore.yaml
+index 8d17418..5391776 100644
+--- a/edgecore.yaml
++++ b/edgecore.yaml
+@@ -62,6 +62,8 @@ modules:
+     cgroupDriver: systemd
+     cgroupsPerQOS: true
+     clusterDomain: cluster.local
++    clusterDNS:
++    - 10.96.0.10
+     configMapAndSecretChangeDetectionStrategy: Get
+     containerLogMaxFiles: 5
+     containerLogMaxSize: 10Mi
+@@ -151,7 +151,7 @@ modules:
+     enable: true
+   metaServer:
+     apiAudiences: null
+-    enable: false
++    enable: true
+     server: 127.0.0.1:10550
+     serviceAccountIssuers:
+     - https://kubernetes.default.svc.cluster.local
+@@ -161,7 +161,7 @@ modules:
+     tlsPrivateKeyFile: /etc/kubeedge/certs/server.key
+     remoteQueryTimeout: 60
+   serviceBus:
+-    enable: false
++    enable: true
+     port: 9060
+     server: 127.0.0.1
+     timeout: 60
+```
+
+Then we need to create another `DaemonSet` of `cilium-agent` only for the `EdgeCore` nodes.
+We need to deploy the `cilium-agent` pods to the nodes where KubeEdge `EdgeCore` runs, labeled with `node-role.kubernetes.io/edge=`.
+Besides that, `cilium-agent` needs to send its API queries to `MetaServer` instead of the Kubernetes API server, which is required to keep the edge autonomy provided by KubeEdge.
+
+```
+### Dump original Cilium DaemonSet configuration
+> kubectl get ds -n kube-system cilium -o yaml > cilium-edgecore.yaml
+
+### Edit and apply the following patch
+> vi cilium-edgecore.yaml
+
+### Deploy the cilium-agent that aligns with edgecore
+> kubectl apply -f cilium-edgecore.yaml
+```
+
+```patch
+diff --git a/cilium-edgecore.yaml b/cilium-edgecore.yaml
+index bff0f0b..3d941d1 100644
+--- a/cilium-edgecore.yaml
++++ b/cilium-edgecore.yaml
+@@ -8,7 +8,7 @@ metadata:
+     app.kubernetes.io/name: cilium-agent
+     app.kubernetes.io/part-of: cilium
+     k8s-app: cilium
+-  name: cilium
++  name: cilium-kubeedge
+   namespace: kube-system
+ spec:
+   revisionHistoryLimit: 10
+@@ -29,6 +29,12 @@ spec:
+         k8s-app: cilium
+     spec:
+       affinity:
++        nodeAffinity:
++          requiredDuringSchedulingIgnoredDuringExecution:
++            nodeSelectorTerms:
++            - matchExpressions:
++              - key: node-role.kubernetes.io/edge
++                operator: Exists
+         podAntiAffinity:
+           requiredDuringSchedulingIgnoredDuringExecution:
+           - labelSelector:
+@@ -39,6 +45,8 @@ spec:
+       containers:
+       - args:
+         - --config-dir=/tmp/cilium/config-map
++        - --k8s-api-server=127.0.0.1:10550
++        - --auto-create-cilium-node-resource=true
+         - --debug
+         command:
+         - cilium-agent
+@@ -178,7 +186,9 @@ spec:
+       dnsPolicy: ClusterFirst
+       hostNetwork: true
+       initContainers:
+-      - command:
++      - args:
++        - --k8s-api-server=127.0.0.1:10550
++        command:
+         - cilium
+         - build-config
+         env:
+```
+
+Below we can see that `cilium-pq45v` (a pod of the normal cilium-agent `DaemonSet`) is running on the cloud node, while `cilium-kubeedge-kkb7z` (from the edgecore-specific `DaemonSet`) is running alongside `EdgeCore`.
+
+```
+> kubectl get pods -A -o wide
+NAMESPACE     NAME                                    READY   STATUS    RESTARTS   AGE     IP               NODE           NOMINATED NODE   READINESS GATES
+kube-system   cilium-kubeedge-kkb7z                   1/1     Running   0          32s     43.135.146.155   edgemaster     <none>           <none>
+kube-system   cilium-operator-fdf6bc9f4-445p6         1/1     Running   0          3h40m   AA.BBB.CCC.DD    tomoyafujita   <none>           <none>
+kube-system   cilium-pq45v                            1/1     Running   0          3h32m   AA.BBB.CCC.DD    tomoyafujita   <none>           <none>
+kube-system   coredns-787d4945fb-2bbdf                1/1     Running   0          8h      10.0.0.104       tomoyafujita   <none>           <none>
+kube-system   coredns-787d4945fb-nmd2p                1/1     Running   0          8h      10.0.0.130       tomoyafujita   <none>           <none>
+kube-system   etcd-tomoyafujita                       1/1     Running   0          8h      AA.BBB.CCC.DD    tomoyafujita   <none>           <none>
+kube-system   kube-apiserver-tomoyafujita             1/1     Running   1          8h      AA.BBB.CCC.DD    tomoyafujita   <none>           <none>
+kube-system   kube-controller-manager-tomoyafujita    1/1     Running   0          8h      AA.BBB.CCC.DD    tomoyafujita   <none>           <none>
+kube-system   kube-proxy-qmxqp                        1/1     Running   0          19m     43.135.146.155   edgemaster     <none>           <none>
+kube-system   kube-proxy-v2ht7                        1/1     Running   0          8h      AA.BBB.CCC.DD    tomoyafujita   <none>           <none>
+kube-system   kube-scheduler-tomoyafujita             1/1     Running   1          8h      AA.BBB.CCC.DD    tomoyafujita   <none>           <none>
+kubeedge      cloudcore-df8544847-6mlw2               1/1     Running   0          4h23m   AA.BBB.CCC.DD    tomoyafujita   <none>           <none>
+kubeedge      edge-eclipse-mosquitto-9cw6r            1/1     Running   0          19m     43.135.146.155   edgemaster     <none>           <none>
+```
+
+
+
+### Check Cilium Connectivity from Pods
+
+Now Cilium is ready to provide network connectivity for application pods and containers.
+We can use the following busybox `DaemonSet` to check the network connectivity via Cilium with `ping`.
+
+```
+> cat busybox.yaml
+apiVersion: apps/v1
+kind: DaemonSet
+metadata:
+  name: busybox
+spec:
+  selector:
+    matchLabels:
+      app: busybox
+  template:
+    metadata:
+      labels:
+        app: busybox
+    spec:
+      containers:
+      - image: busybox
+        command: ["sleep", "3600"]
+        imagePullPolicy: IfNotPresent
+        name: busybox
+
+> kubectl apply -f busybox.yaml
+daemonset.apps/busybox created
+
+> kubectl get pods -o wide
+NAME            READY   STATUS    RESTARTS   AGE   IP           NODE           NOMINATED NODE   READINESS GATES
+busybox-mn98w   1/1     Running   0          84s   10.0.0.58    tomoyafujita   <none>           <none>
+busybox-z2mbw   1/1     Running   0          84s   10.0.1.121   edgemaster     <none>           <none>
+
+> kubectl exec --stdin --tty busybox-mn98w -- /bin/sh
+/ #
+/ # ping 10.0.1.121
+PING 10.0.1.121 (10.0.1.121): 56 data bytes
+64 bytes from 10.0.1.121: seq=0 ttl=63 time=1.326 ms
+64 bytes from 10.0.1.121: seq=1 ttl=63 time=1.620 ms
+64 bytes from 10.0.1.121: seq=2 ttl=63 time=1.341 ms
+64 bytes from 10.0.1.121: seq=3 ttl=63 time=1.685 ms
+^C
+--- 10.0.1.121 ping statistics ---
+4 packets transmitted, 4 packets received, 0% packet loss
+round-trip min/avg/max = 1.326/1.493/1.685 ms
+/ # exit
+> kubectl exec --stdin --tty busybox-z2mbw -- /bin/sh
+/ #
+/ # ping 10.0.0.58
+PING 10.0.0.58 (10.0.0.58): 56 data bytes
+64 bytes from 10.0.0.58: seq=0 ttl=63 time=0.728 ms
+64 bytes from 10.0.0.58: seq=1 ttl=63 time=1.178 ms
+64 bytes from 10.0.0.58: seq=2 ttl=63 time=0.635 ms
+64 bytes from 10.0.0.58: seq=3 ttl=63 time=1.152 ms
+^C
+--- 10.0.0.58 ping statistics ---
+4 packets transmitted, 4 packets received, 0% packet loss
+round-trip min/avg/max = 0.635/0.923/1.178 ms
+```
+
+Finally, we can confirm that the cross-communication between the `busybox` containers via Cilium works just fine!!!
+
+If you want to know more about the technical insights and development, see and subscribe to [KubeEdge EdgeCore supports Cilium CNI](https://github.com/kubeedge/kubeedge/issues/4844) for more details.
diff --git a/blog/release-v1.10/index.mdx b/blog/release-v1.10/index.mdx
index 54a63451a5..83dbe11687 100644
--- a/blog/release-v1.10/index.mdx
+++ b/blog/release-v1.10/index.mdx
@@ -4,9 +4,9 @@ authors:
 categories:
 - General
 - Announcements
-date: 2023-08-25
+date: 2022-03-07
 draft: false
-lastmod: 2023-08-25
+lastmod: 2022-03-07
 summary: KubeEdge v1.10 is live!
 tags:
 - KubeEdge
diff --git a/blog/release-v1.11/index.mdx b/blog/release-v1.11/index.mdx
index 33bee64caa..f72f66ab8b 100644
--- a/blog/release-v1.11/index.mdx
+++ b/blog/release-v1.11/index.mdx
@@ -4,9 +4,9 @@ authors:
 categories:
 - General
 - Announcements
-date: 2023-10-25
+date: 2022-06-21
 draft: false
-lastmod: 2023-10-25
+lastmod: 2022-06-21
 summary: KubeEdge v1.11 is live!
 tags:
 - KubeEdge
diff --git a/blog/release-v1.12/index.mdx b/blog/release-v1.12/index.mdx
index ec5a983122..d6b636568a 100644
--- a/blog/release-v1.12/index.mdx
+++ b/blog/release-v1.12/index.mdx
@@ -5,9 +5,9 @@ categories:
 - General
 - Announcements
 - Releases
-date: 2023-05-15
+date: 2022-09-29
 draft: false
-lastmod: 2023-05-15
+lastmod: 2022-09-29
 summary: KubeEdge v1.12 is live!
 tags:
 - KubeEdge
diff --git a/blog/release-v1.13/index.mdx b/blog/release-v1.13/index.mdx
index 0f79532618..b66d56de42 100644
--- a/blog/release-v1.13/index.mdx
+++ b/blog/release-v1.13/index.mdx
@@ -4,9 +4,9 @@ authors:
 categories:
 - General
 - Announcements
-date: 2023-01-23
+date: 2023-01-18
 draft: false
-lastmod: 2023-01-23
+lastmod: 2023-01-18
 summary: KubeEdge v1.13 is live!
 tags:
 - KubeEdge
diff --git a/blog/release-v1.14/index.mdx b/blog/release-v1.14/index.mdx
index 02d94c30b2..6ba657ffc6 100644
--- a/blog/release-v1.14/index.mdx
+++ b/blog/release-v1.14/index.mdx
@@ -4,9 +4,9 @@ authors:
 categories:
 - General
 - Announcements
-date: 2023-05-15
+date: 2023-07-01
 draft: false
-lastmod: 2023-05-15
+lastmod: 2023-07-01
 summary: KubeEdge v1.14 is live!
 tags:
 - KubeEdge
diff --git a/blog/release-v1.15/index.mdx b/blog/release-v1.15/index.mdx
new file mode 100644
index 0000000000..b56a7c8279
--- /dev/null
+++ b/blog/release-v1.15/index.mdx
@@ -0,0 +1,99 @@
+---
+authors:
+- KubeEdge SIG Release
+categories:
+- General
+- Announcements
+date: 2023-10-13
+draft: false
+lastmod: 2023-10-13
+summary: KubeEdge v1.15 is live!
+tags:
+- KubeEdge
+- kubeedge
+- edge computing
+- kubernetes edge computing
+- K8s edge orchestration
+- edge computing platform
+- cloud native
+- iot
+- iiot
+- release v1.15
+- v1.15
+title: KubeEdge v1.15 is live!
+---
+
+On Oct 13, 2023, KubeEdge released v1.15. The new version introduces several enhanced features, significantly improving support for Windows-based edge nodes, device management, and data plane capabilities.
+
+## v1.15 What's New
+
+- [Support Windows-based Edge Nodes](#support-windows-based-edge-nodes)
+
+- [New v1beta1 version of Device API](#new-v1beta1-version-of-device-api)
+
+- [Support Alpha version of DMI DataPlane and Mapper-Framework](#support-alpha-version-of-dmi-dataplane-and-mapper-framework)
+
+- [Support Kubernetes native Static Pod on Edge Nodes](#support-kubernetes-native-static-pod-on-edge-nodes)
+
+- [Support more Kubernetes Native Plugin Running on Edge Node](#support-more-kubernetes-native-plugin-running-on-edge-node)
+
+- [Upgrade Kubernetes Dependency to v1.26.7](#upgrade-kubernetes-dependency-to-v1267)
+
+## Release Highlights
+
+### Support Windows-based Edge Nodes
+
+Edge computing involves various types of devices, including sensors, cameras, and industrial control devices, some of which may run on the Windows OS. In order to support these devices and use cases, supporting Windows Server nodes is necessary for KubeEdge.
+
+In this release, KubeEdge supports edge nodes running on Windows Server 2019 and supports Windows containers running on edge nodes, thereby extending KubeEdge to the Windows ecosystem and expanding its use cases and ecosystem.
+
+Refer to the link for more details. ([#4914](https://github.com/kubeedge/kubeedge/pull/4914), [#4967](https://github.com/kubeedge/kubeedge/pull/4967))
+
+### New v1beta1 version of Device API
+
+The device API is updated from `v1alpha2` to `v1beta1`. The v1beta1 API updates include:
+
+- The built-in protocols, including Modbus, OPC UA, and Bluetooth, are removed from the device instance. The built-in mappers for these protocols still exist and will be maintained and updated to the latest version.
+
+- Users must define the protocol config through `CustomizedValue` in `ProtocolConfig`.
+
+- DMI data plane related fields are added. Users can configure the collection and reporting frequency of device data, as well as the destination (such as a database or HTTP server) to which the data is pushed.
+
+- A field is added to control whether to report device data to the cloud.
+
+Refer to the link for more details. ([#4983](https://github.com/kubeedge/kubeedge/pull/4983))
+
+### Support Alpha version of DMI DataPlane and Mapper-Framework
+
+The Alpha version of the DMI data plane is supported. The DMI data plane is mainly implemented in the mapper, providing interfaces for pushing data, pulling data, and storing data in a database.
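+
+As a rough illustration of the new data plane fields, a v1beta1 device property can carry its collection/reporting cycles and a push destination. The sketch below is illustrative only — the device name, model, node, and endpoint values are assumptions, and exact field names should be checked against the v1beta1 Device API reference:
+
+```yaml
+apiVersion: devices.kubeedge.io/v1beta1
+kind: Device
+metadata:
+  name: temperature-sensor        # assumed device name for this sketch
+spec:
+  deviceModelRef:
+    name: temperature-model       # assumed device model
+  nodeName: edge-node-1           # assumed edge node
+  properties:
+  - name: temperature
+    collectCycle: 10000           # how often the mapper collects the property (ms)
+    reportCycle: 10000            # how often the collected data is reported (ms)
+    reportToCloud: true           # controls whether device data is reported to the cloud
+    pushMethod:                   # data plane destination, e.g. an HTTP server
+      http:
+        hostName: http://127.0.0.1
+        port: 8080
+        requestPath: /temperature
+        timeout: 5
+```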
+
+To make writing mappers easier, a mapper development framework subproject, **Mapper-Framework**, is provided in this release. Mapper-Framework provides mapper runtime libs and tools for scaffolding and code generation to bootstrap a new mapper project. Users only need to run the command `make generate` to generate a mapper project, and then add the protocol-related code to the mapper.
+
+Refer to the link for more details. ([#5023](https://github.com/kubeedge/kubeedge/pull/5023))
+
+### Support Kubernetes native Static Pod on Edge Nodes
+
+Kubernetes native `Static Pod` is supported on edge nodes in this release. Users can create pods on edge nodes by placing pod manifests in `/etc/kubeedge/manifests`, the same as on a Kubernetes node.
+
+Refer to the link for more details. ([#4825](https://github.com/kubeedge/kubeedge/pull/4825))
+
+### Support more Kubernetes Native Plugin Running on Edge Node
+
+The Kubernetes non-resource request `/version` is supported from the edge node; users can now issue `/version` requests on the edge node through the MetaServer. In addition, the current framework can easily support other non-resource requests, such as `/healthz`, on the edge node. Many Kubernetes plugins, like Cilium and Calico, which depend on these non-resource requests, can now run on edge nodes.
+
+Refer to the link for more details. ([#4904](https://github.com/kubeedge/kubeedge/pull/4904))
+
+### Upgrade Kubernetes Dependency to v1.26.7
+
+The vendored Kubernetes version is upgraded to v1.26.7. Users are now able to use the features of the new version on both the cloud and the edge side.
+
+Refer to the link for more details. ([#4929](https://github.com/kubeedge/kubeedge/pull/4929))
+
+## Important Steps before Upgrading
+
+- In KubeEdge v1.15, the new v1beta1 version of the device API is incompatible with the earlier v1alpha2 version; users need to update their device API YAMLs to v1beta1 if they want to use v1.15.
+
+- In KubeEdge v1.15, users need to upgrade containerd to v1.6.0 or later. Containerd minor version 1.5 and older will not be supported in KubeEdge v1.15.
+Ref: https://kubernetes.io/blog/2022/11/18/upcoming-changes-in-kubernetes-1-26/#cri-api-removal
+
+- In KubeEdge v1.14, EdgeCore removed dockershim support, so users can only use the `remote` runtime type, with the `containerd` runtime used by default. If you want to use the `docker` runtime in v1.15, you need to first set `edged.containerRuntime=remote` and the corresponding Docker configuration, such as `RemoteRuntimeEndpoint` and `RemoteImageEndpoint`, in EdgeCore, and then install the cri-dockerd tools as described in the docs below:
+https://github.com/kubeedge/kubeedge/issues/4843
\ No newline at end of file
diff --git a/docs/advanced/inclusterconfig.md b/docs/advanced/inclusterconfig.md
new file mode 100644
index 0000000000..178a6eeee0
--- /dev/null
+++ b/docs/advanced/inclusterconfig.md
@@ -0,0 +1,91 @@
+---
+title: Edge pods use in-cluster config to access Kube-APIServer
+sidebar_position: 7
+---
+
+## Abstract
+
+In edge scenarios, the edge and the cloud are typically in different network environments, so edge pods cannot access the Kube-APIServer through in-cluster config directly. When you deploy edge pods that need to use the `in-cluster config` mode, they will fail with the error message below:
+
+```
+unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
+```
+
+From KubeEdge v1.17.0, KubeEdge supports edge pods using the `in-cluster config` mechanism to access the Kube-APIServer.
+If you need to use the `in-cluster config` feature, you need to enable `metaServer` and turn on the `requireAuthorization` featureGate. The steps are described in detail below.
+
+Please refer to the [support in-cluster config proposal](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/inclusterconfig.md) for more details about the design of this feature.
+
+
+## Getting Started
+
+### Cloud
+
+When using `keadm init` to deploy CloudCore, please enable the `dynamicController` module and the `requireAuthorization` featureGate:
+
+```
+keadm init --advertise-address="THE-EXPOSED-IP" --kubeedge-version=v1.17.0 --set cloudCore.modules.dynamicController.enable=true,cloudCore.featureGates.requireAuthorization=true
+```
+
+If you have already deployed CloudCore without setting the `dynamicController` module and the featureGate, you can modify the configuration following these steps:
+
+1. Modify the CloudCore configuration
+
+   Execute `kubectl edit cm cloudcore -nkubeedge` and then set `featureGates.requireAuthorization=true` and `dynamicController.enable=true`
+
+```yaml
+apiVersion: v1
+data:
+  cloudcore.yaml: |
+    apiVersion: cloudcore.config.kubeedge.io/v1alpha2
+    ...
+    featureGates:
+      requireAuthorization: true
+    modules:
+      ...
+      dynamicController:
+        enable: true
+      ...
+```
+
+2. Create the clusterrole for this feature.
+
+   The clusterrole required for this feature can be referenced from the file [rbac_cloudcore_requireAuthorization](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/templates/rbac_cloudcore_feature.yaml). Add these clusterroles in the cluster.
+
+3. Restart the CloudCore pod.
+
+### Edge
+
+1. Install EdgeCore first
+
+2. Modify the EdgeCore configuration.
+
+   Execute `vi /etc/kubeedge/config/edgecore.yaml` and then set `featureGates.requireAuthorization=true` and `metaServer.enable=true`
+
+```yaml
+apiVersion: edgecore.config.kubeedge.io/v1alpha2
+...
+kind: EdgeCore
+featureGates:
+  requireAuthorization: true
+modules:
+  ...
+  metaServer:
+    enable: true
+  ...
+```
+
+Save these modifications and then execute `sudo systemctl restart edgecore.service` to restart EdgeCore.
+
+### Deploy your edge pods
+
+After CloudCore and EdgeCore are set up successfully, you can deploy your edge pods that need the `in-cluster config` mode to access the Kube-APIServer.
+
+:::note
+If pods are unable to access the Kube-APIServer normally, with an error message as follows:
+
+```
+User "system:xxx:kubeedge:cloudcore" cannot create resource "xxx" in API group "xxx.k8s.io" at the cluster scope
+```
+
+it indicates that your pod requires additional `clusterrole` permissions. You can manually configure the corresponding `clusterrole` on the cloud side.
+:::
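+
+For a quick manual check that the `in-cluster config` path works from inside an edge pod, you can issue a request the same way the client libraries do, using the service account token and the `KUBERNETES_SERVICE_HOST`/`KUBERNETES_SERVICE_PORT` environment variables (a sketch for illustration; on edge nodes the request is served through the MetaServer):
+
+```shell
+TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
+curl -sSk -H "Authorization: Bearer $TOKEN" \
+  https://$KUBERNETES_SERVICE_HOST:$KUBERNETES_SERVICE_PORT/version
+```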
diff --git a/docs/setup/install-with-keadm.md b/docs/setup/install-with-keadm.md
index 8b585bf53a..d824037e17 100644
--- a/docs/setup/install-with-keadm.md
+++ b/docs/setup/install-with-keadm.md
@@ -2,51 +2,54 @@ title: Installing KubeEdge with Keadm
 sidebar_position: 3
 ---
-Keadm is used to install the cloud and edge components of KubeEdge. It is not responsible for installing K8s and runtime.
-Please refer [kubernetes-compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility) to get **Kubernetes compatibility** and determine what version of Kubernetes would be installed.
+Keadm is used to install the cloud and edge components of KubeEdge. It does not handle the installation of Kubernetes and its [runtime environment](https://kubeedge.io/docs/setup/prerequisites/runtime).
 
-## Limitation
+Please refer to the [Kubernetes compatibility](https://github.com/kubeedge/kubeedge#kubernetes-compatibility) documentation to check **Kubernetes compatibility** and ascertain the Kubernetes version to be installed.
 
-- Need super user rights (or root rights) to run.
+## Prerequisite
+- It requires super user rights (or root rights) to run.
 
-## Install keadm
+## Install Keadm
 
-There're three ways to download a `keadm` binary
+There are three ways to download the `keadm` binary:
 
-- Download from [github release](https://github.com/kubeedge/kubeedge/releases).
+1. Download from [GitHub release](https://github.com/kubeedge/kubeedge/releases).
 
-  Now KubeEdge github officially holds three arch releases: amd64, arm, arm64. Please download the right arch package according to your platform, with your expected version.
+   KubeEdge GitHub officially holds three architecture releases: amd64, arm, and arm64. Please download the correct package according to your platform and desired version.
+
   ```shell
-  wget https://github.com/kubeedge/kubeedge/releases/download/v1.12.1/keadm-v1.12.1-linux-amd64.tar.gz
-  tar -zxvf keadm-v1.12.1-linux-amd64.tar.gz
-  cp keadm-v1.12.1-linux-amd64/keadm/keadm /usr/local/bin/keadm
+  wget https://github.com/kubeedge/kubeedge/releases/download/v1.17.0/keadm-v1.17.0-linux-amd64.tar.gz
+  tar -zxvf keadm-v1.17.0-linux-amd64.tar.gz
+  cp keadm-v1.17.0-linux-amd64/keadm/keadm /usr/local/bin/keadm
   ```
 
-- Download from dockerhub KubeEdge official release image.
+
+2. Download from the official KubeEdge release image on Docker Hub.
 
   ```shell
-  docker run --rm kubeedge/installation-package:v1.12.1 cat /usr/local/bin/keadm > /usr/local/bin/keadm && chmod +x /usr/local/bin/keadm
+  docker run --rm kubeedge/installation-package:v1.17.0 cat /usr/local/bin/keadm > /usr/local/bin/keadm && chmod +x /usr/local/bin/keadm
   ```
 
-- Build from source
+3. Build from Source
 
-  ref: [build from source](./install-with-binary#build-from-source)
-
+- Refer to [build from source](./install-with-binary#build-from-source) for instructions.
 
 ## Setup Cloud Side (KubeEdge Master Node)
 
-By default ports `10000` and `10002` in your cloudcore needs to be accessible for your edge nodes.
+By default, ports `10000` and `10002` on your CloudCore need to be accessible for your edge nodes.
+
+**IMPORTANT NOTES:**
 
-**IMPORTANT NOTE:**
 
-1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other info of the k8s cluster.
-2. Please make sure edge node can connect cloud node using local IP of cloud node, or you need to specify public IP of cloud node with `--advertise-address` flag.
-3. `--advertise-address` is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP.
+1. At least one of `kubeconfig` or `master` must be configured correctly to verify the version and other information of the Kubernetes cluster.
 
+2. Ensure the edge node can connect to the cloud node using the local IP of the cloud node, or specify the public IP of the cloud node with the `--advertise-address` flag.
+
+3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate). The default value is the local IP.
 
 ### keadm init
 
-`keadm init` provides a solution for integrating Cloudcore helm chart. Cloudcore will be deployed to cloud nodes in container mode.
+`keadm init` provides a solution for integrating the CloudCore Helm chart. CloudCore will be deployed to cloud nodes in container mode.
 
 Example:
 
@@ -55,6 +58,7 @@ keadm init --advertise-address="THE-EXPOSED-IP" --profile version=v1.12.1 --kube-config=/root/.kube/config
 ```
 
 Output:
+
 ```shell
 Kubernetes version verification passed, KubeEdge installation will start...
 CLOUDCORE started
 =========CHART DETAILS=======
 NAME: cloudcore
 LAST DEPLOYED: Tue Jun 4 17:19:15 2024
 NAMESPACE: kubeedge
 STATUS: deployed
 REVISION: 1
 ```
 
-You can run `kubectl get all -n kubeedge` to ensure that cloudcore start successfully just like below.
+You can run `kubectl get all -n kubeedge` to ensure that CloudCore started successfully, as shown below.
+
 ```shell
 # kubectl get all -n kubeedge
 NAME                             READY   STATUS    RESTARTS   AGE
 ...
 NAME                                   DESIRED   CURRENT   READY   AGE
 replicaset.apps/cloudcore-56b8454784   1         1         1       46s
 ```
 
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
+
+1. For setting flags with `--set key=value` for the CloudCore Helm chart, refer to the [KubeEdge CloudCore Helm Charts README.md](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/README.md).
 
-1. Set flags `--set key=value` for cloudcore helm chart could refer to [KubeEdge Cloudcore Helm Charts README.md](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/README.md).
 2. You can start with one of Keadm’s built-in configuration profiles and then further customize the configuration for your specific needs. Currently, the built-in configuration profile keyword is `version`. Refer to [version.yaml](https://github.com/kubeedge/kubeedge/blob/master/manifests/profiles/version.yaml) as `values.yaml`, you can make your custom values file here, and add flags like `--profile version=v1.9.0 --set key=value` to use this profile. `--external-helm-root` flag provides a feature function to install the external helm charts like edgemesh.
-3. `keadm init` deploy cloudcore in container mode, if you want to deploy cloudcore as binary, please ref [`keadm deprecated init`](#keadm-deprecated-init) below.
+
+3. By default, `keadm init` deploys CloudCore in container mode. If you want to deploy CloudCore as a binary, please refer to [`keadm deprecated init`](#keadm-deprecated-init).
 
 Example:
 
@@ -94,7 +101,7 @@ keadm init --set server.advertiseAddress="THE-EXPOSED-IP" --set server.nodeName=allinone --kube-config=/root/.kube/config --force --external-helm-root=/root/go/src/github.com/edgemesh/build/helm --profile=edgemesh
 ```
 
-If you are familiar with the helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
+If you are familiar with the Helm chart installation, please refer to [KubeEdge Helm Charts](https://github.com/kubeedge/kubeedge/tree/master/manifests/charts).
 
 **SPECIAL SCENARIO:**
 
@@ -109,24 +116,27 @@ To handle kube-proxy, you can refer to the [two methods](#anchor-name) mentioned
 
 ### keadm manifest generate
 
-You can also get the manifests with `keadm manifest generate`.
+You can generate the manifests using `keadm manifest generate`.
 
 Example:
 
 ```shell
 keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root/.kube/config > kubeedge-cloudcore.yaml
 ```
+
-> Add --skip-crds flag to skip outputing the CRDs
+> Add the `--skip-crds` flag to skip outputting the CRDs.
 
 ### keadm deprecated init
 
-`keadm deprecated init` will install cloudcore in binary process, generate the certs and install the CRDs. It also provides a flag by which a specific version can be set.
+`keadm deprecated init` installs CloudCore as a binary process, generates the certificates, and installs the CRDs. It also provides a flag to set a specific version.
 
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
 
-1. At least one of kubeconfig or master must be configured correctly, so that it can be used to verify the version and other info of the k8s cluster.
-2. Please make sure edge node can connect cloud node using local IP of cloud node, or you need to specify public IP of cloud node with `--advertise-address` flag.
-3. `--advertise-address` is the address exposed by the cloud side (will be added to the SANs of the CloudCore certificate), the default value is the local IP.
+1. At least one of `kubeconfig` or `master` must be configured correctly to verify the version and other information of the Kubernetes cluster.
+
+2. Ensure the edge node can connect to the cloud node using the local IP of the cloud node, or specify the public IP of the cloud node with the `--advertise-address` flag.
+
+3. `--advertise-address` is the address exposed by the cloud side (it will be added to the SANs of the CloudCore certificate). The default value is the local IP.
 
 Example:
 
 ```shell
@@ -141,7 +151,8 @@ keadm manifest generate --advertise-address="THE-EXPOSED-IP" --kube-config=/root
     ...
     CloudCore started
     ```
 
-   You can run `ps -elf | grep cloudcore` command to ensure that cloudcore is running successfully.
+   You can run the `ps -elf | grep cloudcore` command to ensure that CloudCore is running successfully.
+
    ```shell
    # ps -elf | grep cloudcore
    0 S root 2736434 1 1 80 0 - 336281 futex_ 11:02 pts/2 00:00:00 /usr/local/bin/cloudcore
   ...
    ```
 
 ### Get Token From Cloud Side
 
-Run `keadm gettoken` in **cloud side** will return the token, which will be used when joining edge nodes.
+Run `keadm gettoken` on the **cloud side** to retrieve the token, which will be used when joining edge nodes.
 
 ```shell
 # keadm gettoken
 ```
 
 ### Join Edge Node
 
 #### keadm join
 
-`keadm join` will install edgecore. It also provides a flag by which a specific version can be set. It will pull image [kubeedge/installation-package](https://hub.docker.com/r/kubeedge/installation-package) from dockerhub and copy binary `edgecore` from container to hostpath, and then start `edgecore` as a system service.
+`keadm join` installs EdgeCore. It also provides a flag to set a specific version. It pulls the image [kubeedge/installation-package](https://hub.docker.com/r/kubeedge/installation-package) from Docker Hub, copies the `edgecore` binary from the container to the host path, and then starts `edgecore` as a system service.
 
 Example:
 
@@ -170,10 +182,13 @@ keadm join --cloudcore-ipport="THE-EXPOSED-IP":10000 --token=27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE --kubeedge-version=v1.12.1
 ```
 
-**IMPORTANT NOTE:**
-1. `--cloudcore-ipport` flag is a mandatory flag.
-2. If you want to apply certificate for edge node automatically, `--token` is needed.
-3. The kubeEdge version used in cloud and edge side should be same.
+**IMPORTANT NOTES:**
+
+1. The `--cloudcore-ipport` flag is mandatory.
+
+2. If you want to apply a certificate for the edge node automatically, `--token` is needed.
+
+3. The KubeEdge version used on the cloud and edge sides should be the same.
 
 Output:
 
@@ -182,7 +197,8 @@ Output:
 KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
 ```
 
-you can run `systemctl status edgecore` command to ensure edgecore is running successfully
+You can run the `systemctl status edgecore` command to ensure EdgeCore is running successfully:
+
 ```shell
 # systemctl status edgecore
 ● edgecore.service
@@ -195,14 +211,17 @@ you can run `systemctl status edgecore` command to ensure edgecore is running su
 ```
 
 #### keadm deprecated join
 
-You can also use `keadm deprecated join` to start edgecore from release pacakge. It will download release packages from [KubeEdge release website](https://github.com/kubeedge/kubeedge/releases), and then start `edgecore` in binary progress.
+You can also use `keadm deprecated join` to start EdgeCore from the release package. It will download the release package from the [KubeEdge release website](https://github.com/kubeedge/kubeedge/releases), and then start `edgecore` as a binary process.
 
 Example:
+
 ```shell
 keadm deprecated join --cloudcore-ipport="THE-EXPOSED-IP":10000 --token=27a37ef16159f7d3be8fae95d588b79b3adaaf92727b72659eb89758c66ffda2.eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJleHAiOjE1OTAyMTYwNzd9.JBj8LLYWXwbbvHKffJBpPd5CyxqapRQYDIXtFZErgYE --kubeedge-version=1.12.0
 ```
 
 Output:
+
 ```shell
 MQTT is installed in this host
 ...
 KubeEdge edgecore is running, For logs visit: journalctl -u edgecore.service -xe
 ```
 
 ### Deploy demo on edge nodes
-ref: [Deploy demo on edge nodes](./install-with-binary#deploy-demo-on-edge-nodes)
+
+Refer to the [Deploy demo on edge nodes](./install-with-binary#deploy-demo-on-edge-nodes) documentation.
 
 ### Enable `kubectl logs` Feature
 
-Before deploying metrics-server , `kubectl logs` feature must be activated:
+Before deploying the metrics-server, the `kubectl logs` feature must be activated:
 
-> Note that if cloudcore is deployed using helm:
-> - The stream certs are generated automatically and cloudStream feature is enabled by default. So, step 1-3 could
-    be skipped unless customization is needed.
-> - Also, step 4 could be finished by iptablesmanager component by default, manually operations are not needed.
-    Refer to the [cloudcore helm values](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/values.yaml#L67).
-> - Operations in step 5-6 related to cloudcore could also be skipped.
+> Note for Helm deployments:
+> - Stream certificates are generated automatically and the CloudStream feature is enabled by default. Therefore, Steps 1-3 can be skipped unless customization is needed.
+> - Step 4 could be finished by the iptablesmanager component by default, so manual operations are not needed. Refer to the [cloudcore helm values](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/values.yaml#L67).
+> - Operations in Steps 5-6 related to CloudCore can also be skipped.
 
-1. Make sure you can find the kubernetes `ca.crt` and `ca.key` files. If you set up your kubernetes cluster by `kubeadm` , those files will be in `/etc/kubernetes/pki/` dir.
+1. Ensure you can locate the Kubernetes `ca.crt` and `ca.key` files. If you set up your Kubernetes cluster with `kubeadm`, these files will be in the `/etc/kubernetes/pki/` directory.
 
    ``` shell
   ls /etc/kubernetes/pki/
   ```
 
-2. Set `CLOUDCOREIPS` env. The environment variable is set to specify the IP address of cloudcore, or a VIP if you have a highly available cluster.
-   Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with cloudcore.
+2. Set the `CLOUDCOREIPS` environment variable to specify the IP address of CloudCore, or a VIP if you have a highly available cluster. Set `CLOUDCORE_DOMAINS` instead if Kubernetes uses domain names to communicate with CloudCore.
 
    ```bash
    export CLOUDCOREIPS="192.168.0.139"
    ```
 
-   (Warning: the same **terminal** is essential to continue the work, or it is necessary to type this command again.) Checking the environment variable with the following command:
+
+   (Warning: the same **terminal** is essential to continue the work, or it is necessary to type this command again). You can check the environment variable with the following command:
+
    ``` shell
    echo $CLOUDCOREIPS
    ```
 
-3. Generate the certificates for **CloudStream** on cloud node, however, the generation file is not in the `/etc/kubeedge/`, we need to copy it from the repository which was git cloned from GitHub.
-   Change user to root:
+3. Generate the certificates for **CloudStream** on the cloud node. The generation file is not in `/etc/kubeedge/`, so it needs to be copied from the repository cloned from GitHub. Switch to the root user:
+
    ```shell
    sudo su
    ```
 
-   Copy certificates generation file from original cloned repository:
+
+   Copy the certificate generation file from the original cloned repository:
+
    ```shell
    cp $GOPATH/src/github.com/kubeedge/kubeedge/build/tools/certgen.sh /etc/kubeedge/
    ```
 
+   Change directory to the kubeedge directory:
+
    ```shell
    cd /etc/kubeedge/
    ```
 
+   Generate certificates from **certgen.sh**
    ```bash
    /etc/kubeedge/certgen.sh stream
    ```
 
-4. It is needed to set iptables on the host. (This command should be executed on every apiserver deployed node.)(In this case, this the master node, and execute this command by root.)
-   Run the following command on the host on which each apiserver runs:
+4. It is needed to set iptables on the host. (This command should be executed on every node where an apiserver is deployed; in this case, this is the master node, and it should be executed by root.) Run the following command on the host where each apiserver runs:
 
-   **Note:** You need to get the configmap first, which contains all the cloudcore ips and tunnel ports.
+   **Note:** First, get the configmap containing all the CloudCore IPs and tunnel ports:
 
    ```bash
    kubectl get cm tunnelport -nkubeedge -oyaml
 
    apiVersion: v1
    kind: ConfigMap
    metadata:
      annotations:
        tunnelportrecord.kubeedge.io: '{"ipTunnelPort":{"192.168.1.16":10350,"192.168.1.17":10351},"port":{"10350":true,"10351":true}}'
    ...
    ```
 
-   Then set all the iptables for multi cloudcore instances to every node that apiserver runs. The cloudcore ips and tunnel ports should be get from configmap above.
+   Then set all the iptables for multi CloudCore instances on every node where an apiserver runs. The CloudCore IPs and tunnel ports should be obtained from the configmap above.
 
    ```bash
    iptables -t nat -A OUTPUT -p tcp --dport $YOUR-TUNNEL-PORT -j DNAT --to $YOUR-CLOUDCORE-IP:10003
    iptables -t nat -A OUTPUT -p tcp --dport 10351 -j DNAT --to 192.168.1.17:10003
    ```
 
-   If you are not sure if you have setting of iptables, and you want to clean all of them.
-   (If you set up iptables wrongly, it will block you out of your `kubectl logs` feature)
+   If you are unsure about the current iptables settings and want to clean all of them. (If you set up iptables wrongly, it will block you out of your `kubectl logs` feature)
+
   The following command can be used to clean up iptables:
+
   ``` shell
   iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
   ```
 
-
 5. Modify **both** `/etc/kubeedge/config/cloudcore.yaml` and `/etc/kubeedge/config/edgecore.yaml` on cloudcore and edgecore. Set up **cloudStream** and **edgeStream** to `enable: true`. Change the server IP to the cloudcore IP (the same as $CLOUDCOREIPS).
 
-   Open the YAML file in cloudcore:
+   Open the YAML file in CloudCore:
+
    ```shell
    sudo nano /etc/kubeedge/config/cloudcore.yaml
    ```
 
    Modify the file in the following part (`enable: true`):
+
    ```yaml
    cloudStream:
      enable: true
@@ -313,11 +338,14 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
      tunnelPort: 10004
    ```
 
-   Open the YAML file in edgecore:
+   Open the YAML file in EdgeCore:
+
    ``` shell
    sudo nano /etc/kubeedge/config/edgecore.yaml
    ```
+
    Modify the file in the following part (`enable: true`), (`server: 192.168.0.193:10004`):
+
    ``` yaml
    edgeStream:
      enable: true
@@ -330,29 +358,37 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
      writeDeadline: 15
    ```
 
-6. Restart all the cloudcore and edgecore.
+6. Restart all the CloudCore and EdgeCore.
 
    ``` shell
    sudo su
    ```
 
-   cloudCore in process mode:
+
+   If CloudCore is running in process mode:
+
    ``` shell
    pkill cloudcore
    nohup cloudcore > cloudcore.log 2>&1 &
    ```
 
-   or cloudCore in kubernetes deployment mode:
+
+   If CloudCore is running in Kubernetes deployment mode:
+
    ``` shell
    kubectl -n kubeedge rollout restart deployment cloudcore
    ```
 
-   edgeCore:
+
+   EdgeCore:
+
   ``` shell
   systemctl restart edgecore.service
   ```
 
-   If you fail to restart edgecore, check if that is because of `kube-proxy` and kill it. **kubeedge** reject it by default, we use a succedaneum called [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md)
+   If you fail to restart EdgeCore, check whether that is caused by `kube-proxy`, and kill it. **KubeEdge** rejects it by default; we use a substitute called [edgemesh](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/edgemesh-design.md)
 
+   **Note:** the importance is to avoid `kube-proxy` being deployed on edgenode. There are two methods to solve it:
 
-   1. Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
+   - **Method 1:** Add the following settings by calling `kubectl edit daemonsets.apps -n kube-system kube-proxy`:
+
    ``` yaml
    spec:
      template:
@@ -365,24 +401,26 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
          - key: node-role.kubernetes.io/edge
            operator: DoesNotExist
    ```
 
-   or just run the below command directly in the shell window:
+
+   or just run the following command directly in the shell window:
+
    ```shell
    kubectl patch daemonset kube-proxy -n kube-system -p '{"spec": {"template": {"spec": {"affinity": {"nodeAffinity": {"requiredDuringSchedulingIgnoredDuringExecution": {"nodeSelectorTerms": [{"matchExpressions": [{"key": "node-role.kubernetes.io/edge", "operator": "DoesNotExist"}]}]}}}}}}}'
    ```
 
-   2. If you still want to run `kube-proxy`, ask **edgecore** not to check the environment by adding the env variable in `edgecore.service` :
+   - **Method 2:** If you still want to run `kube-proxy`, instruct **edgecore** not to check the environment by adding the environment variable in `edgecore.service`:
 
      ``` shell
     sudo vi /etc/kubeedge/edgecore.service
     ```
 
-    - Add the following line into the **edgecore.service** file:
+    Add the following line into the **edgecore.service** file:
 
     ``` shell
     Environment="CHECK_EDGECORE_ENVIRONMENT=false"
     ```
 
-    - The final file should look like this:
+    The final file should look like this:
 
     ```
    Description=edgecore.service
@@ -397,6 +435,7 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
    ```
 
 ### Support Metrics-server in Cloud
+
 1. The realization of this function point reuses cloudstream and edgestream modules. So you also need to perform all steps of *Enable `kubectl logs` Feature*.
 
 2. Since the kubelet ports of edge nodes and cloud nodes are not the same, the current release version of metrics-server(0.3.x) does not support automatic port identification (It is the 0.4.0 feature), so you need to manually compile the image from master branch yourself now.
 
@@ -442,7 +481,7 @@ Before deploying metrics-server , `kubectl logs` feature must be activated:
    ```
    iptables -t nat -A OUTPUT -p tcp --dport 10350 -j DNAT --to $CLOUDCOREIPS:10003
    ```
-   (To direct the request for metric-data from edgecore:10250 through tunnel between cloudcore and edgecore, the iptables is vitally important.)
+   (To direct the request for metric-data from edgecore:10250 through the tunnel between CloudCore and EdgeCore, the iptables is vitally important.)
 
    Before you deploy metrics-server, you have to make sure that you deploy it on the node which has apiserver deployed on. In this case, that is the master node. As a consequence, it is needed to make master node schedulable by the following command:
 
@@ -468,7 +507,8 @@
    - charlie-latest
    ```
 
-**IMPORTANT NOTE:**
+**IMPORTANT NOTES:**
+
 1. Metrics-server needs to use hostnetwork network mode.
 
 2. Use the image compiled by yourself and set imagePullPolicy to Never.
 
@@ -517,4 +557,5 @@ It provides a flag for users to specify kubeconfig path, the default path is `/r
 ```
 
 ### Node
-`keadm reset` or `keadm deprecated reset` will stop `edgecore` and it doesn't uninstall/remove any of the pre-requisites.
+
+`keadm reset` or `keadm deprecated reset` will stop `edgecore` and it doesn't uninstall/remove any of the pre-requisites.
\ No newline at end of file
diff --git a/docusaurus.config.js b/docusaurus.config.js
index 2fbefafa79..b3b1a7b25b 100644
--- a/docusaurus.config.js
+++ b/docusaurus.config.js
@@ -136,7 +136,7 @@ const config = {
       logo: {
         src: "img/avatar.png",
         target: "_self",
-        href: "https://kubeedge.io",
+        href: "/",
       },
       items: [
         {
diff --git a/i18n/zh/docusaurus-plugin-content-blog/authors.yml b/i18n/zh/docusaurus-plugin-content-blog/authors.yml
index 070c2ddb95..1b48f26b3e 100644
--- a/i18n/zh/docusaurus-plugin-content-blog/authors.yml
+++ b/i18n/zh/docusaurus-plugin-content-blog/authors.yml
@@ -52,4 +52,9 @@ Trilok Geer:
 KubeEdge SIG Release:
   name: KubeEdge SIG Release
   url: https://github.com/kubeedge-bot
-  image_url: https://avatars.githubusercontent.com/u/48982446?v=4
\ No newline at end of file
+  image_url: https://avatars.githubusercontent.com/u/48982446?v=4
+
+Tomoya Fujita:
+  name: Tomoya Fujita
+  url: https://github.com/fujitatomoya
+  image_url: https://avatars.githubusercontent.com/u/43395114?v=4
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-docs/current/advanced/inclusterconfig.md b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/inclusterconfig.md
new file mode 100644
index 0000000000..e8e42f25ef
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-docs/current/advanced/inclusterconfig.md
@@ -0,0 +1,91 @@
+---
+title: 边缘pods使用in-cluster config访问Kube-APIServer
+sidebar_position: 7
+---
+
+## 概要
+
+在边缘场景中,边缘端和云端通常处于不同的网络环境,因此边缘 Pod 无法直接通过 in-cluster config 访问 Kube-APIServer。当您部署的边缘 Pod 需要使用 `in-cluster config` 时,Pod 日志会出现类似如下报错:
+
+```
+unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
+```
+
+从 KubeEdge v1.17.0 起,KubeEdge 开始支持边缘 Pod 使用 `in-cluster config` 机制访问 Kube-APIServer。如果您需要使用该特性,请参考下面的步骤。
+
+您也可以参考 [in-cluster config 特性 proposal](https://github.com/kubeedge/kubeedge/blob/master/docs/proposals/inclusterconfig.md) 来了解关于该特性的设计与实现。
+
+
+## 操作步骤
+
+### 云端
+
+当您使用 `keadm init` 安装 CloudCore 时,请打开 `dynamicController` 模块以及 `requireAuthorization` 特性开关:
+
+```
+keadm init --advertise-address="THE-EXPOSED-IP" --kubeedge-version=v1.17.0 --set cloudCore.modules.dynamicController.enable=true,cloudCore.featureGates.requireAuthorization=true
+```
+
+如果您已经安装过 CloudCore,并且在安装时没有配置 `dynamicController` 模块以及 `requireAuthorization` 特性开关,请按照以下步骤修改配置:
+
+1. 修改 CloudCore 配置
+
+   执行 `kubectl edit cm cloudcore -nkubeedge` 并配置 `featureGates.requireAuthorization=true` 以及 `dynamicController.enable=true`:
+
+```yaml
+apiVersion: v1
+data:
+  cloudcore.yaml: |
+    apiVersion: cloudcore.config.kubeedge.io/v1alpha2
+    ...
+    featureGates:
+      requireAuthorization: true
+    modules:
+      ...
+      dynamicController:
+        enable: true
+      ...
+```
+
+2. 创建该特性相关的 clusterrole
+
+   特性相关的 clusterrole 请参考 [rbac_cloudcore_requireAuthorization](https://github.com/kubeedge/kubeedge/blob/master/manifests/charts/cloudcore/templates/rbac_cloudcore_feature.yaml)。在集群中添加参考文档中的 clusterrole。
+
+3. 重启 CloudCore 的 Pod
+
+### 边缘端
+
+1. 请先安装好 EdgeCore
+
+2. 修改 EdgeCore 配置
+
+   执行 `vi /etc/kubeedge/config/edgecore.yaml`,配置 `featureGates.requireAuthorization=true` 以及 `metaServer.enable=true`:
+
+```yaml
+apiVersion: edgecore.config.kubeedge.io/v1alpha2
+...
+kind: EdgeCore
+featureGates:
+  requireAuthorization: true
+modules:
+  ...
+  metaServer:
+    enable: true
+  ...
+```
+
+保存以上修改并执行 `sudo systemctl restart edgecore.service` 来重启 EdgeCore。
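+
+重启后,可以在边缘节点上快速验证 MetaServer 是否生效(示例,仅供参考,假设 MetaServer 监听默认地址 `127.0.0.1:10550`):
+
+```shell
+curl 127.0.0.1:10550/version
+```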
+
+### 部署您的边缘应用
+
+当 CloudCore 和 EdgeCore 配置完成后,您就可以部署边缘应用,并通过 `in-cluster config` 机制访问 Kube-APIServer 了。
+
+:::note
+如果您的 pods 无法直接访问 Kube-APIServer,并且错误信息类似如下权限错误:
+
+```
+User "system:xxx:kubeedge:cloudcore" cannot create resource "xxx" in API group "xxx.k8s.io" at the cluster scope
+```
+
+说明该 pod 需要额外的 `clusterrole` 权限,您可以手动在集群中添加对应的 `clusterrole`。
+:::
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx
new file mode 100644
index 0000000000..f6335637c0
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/Raisecom-Tech/index.mdx
@@ -0,0 +1,26 @@
+---
+date: 2024-05-27
+title: 瑞斯康达科技股份有限公司
+subTitle:
+description: 采用KubeEdge作为智能监控方案实施的重要组成部分,有效完成了对工厂安全的AI监控,减少了安全事故的发生,提高了工厂的生产效率。
+tags:
+  - 用户案例
+---
+
+# 基于KubeEdge的智能监控方案
+
+## 挑战
+
+保障工业生产安全是瑞斯康达制造工厂的重要需求,传统工人的生产安全检测方式采用人工方式,速度慢、效率低,工人不遵守安全要求的情况仍时有发生,且容易被忽视,具有很大的安全隐患,影响工厂的生产效率。
+
+## 解决方案
+
+开发基于人工智能算法的工业智能监控应用,以取代人工监控。但仅有智能监控应用是不够的,智能边缘应用的部署和管理、云端训练与边缘推理的协同等新问题也随之出现,成为该解决方案在工业生产环境中大规模应用的瓶颈。
+
+中国电信研究院将KubeEdge作为智能监控方案实施的重要组成部分,帮助瑞斯康达科技解决该问题。中国电信研究院架构师Xiaohou Shi完成了该方案的设计。该案例通过工业视觉应用,结合深度学习算法,实时监控工厂工人的安全状态。引入KubeEdge作为边缘计算平台,用于管理边缘设备和智能监控应用的运行环境。通过KubeEdge,可以在云端对监控模型进行训练,并自动部署到边缘节点进行推理执行,提高运营效率,降低运维成本。
+
+## 优势
+
+在此应用场景中,KubeEdge完成了边缘应用的统一管理,同时KubeEdge还可以充分利用云边协同的优势,借助KubeEdge作为边缘计算平台,有效完成了对工厂安全的AI监控,减少了安全事故的发生,提高了工厂的生产效率。
+
+基于此成功案例,未来将在KubeEdge上部署更多深度学习算法,解决边缘计算方面的问题,未来也将与KubeEdge开展更多场景化工业智能应用的合作。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx
new file mode 100644
index 0000000000..6aa505e6fe
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/XingHai/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-27
+title: 兴海物联科技有限公司
+subTitle:
+description: 兴海物联采用KubeEdge构建了云边端协同的智慧校园,大幅提升了校园管理效率。
+tags:
+  - 用户案例
+---
+
+# 基于KubeEdge构建智慧校园
+
+## 挑战
+
+兴海物联是一家利用建筑物联网平台、智能硬件、人工智能等技术,提供智慧楼宇综合解决方案的物联网企业,是中海物业智慧校园标准的制定者和践行者,是华为智慧校园解决方案核心全链条服务商。
+
+该公司服务客户遍及中国及全球80个主要城市,已交付项目741个,总建筑面积超过1.56亿平方米,业务涵盖高端住宅、商业综合体、超级写字楼、政府物业、工业园区等多种建筑类型。
+
+近年来,随着业务的拓展和园区业主对服务品质要求的不断提升,兴海物联致力于利用边缘计算和物联网技术构建可持续发展的智慧校园,提高园区运营和管理效率。
+
+## 解决方案
+
+如今兴海物联的服务领域越来越广泛,因此其解决方案需要具备可移植性和可复制性,需要保证数据的实时处理和安全的存储。KubeEdge以云原生开发和边云协同为设计理念,已成为兴海物联打造智慧校园不可或缺的一部分。
+
+- 容器镜像一次构建,随处运行,有效降低新建园区部署运维复杂度。
+- 边云协同使数据在边缘处理,确保实时性和安全性,并降低网络带宽成本。
+- KubeEdge 可以轻松添加硬件,并支持常见协议,无需二次开发。
+
+## 优势
+
+兴海物联基于KubeEdge和自有兴海物联云平台,构建了云边端协同的智慧校园,大幅提升了校园管理效率。在AI的助力下,近30%的重复性工作实现了自动化。未来,兴海物联还将继续与KubeEdge合作,推出基于KubeEdge的智慧校园解决方案。
\ No newline at end of file
diff --git a/i18n/zh/docusaurus-plugin-content-pages/case-studies/jingying/index.mdx b/i18n/zh/docusaurus-plugin-content-pages/case-studies/jingying/index.mdx
new file mode 100644
index 0000000000..7ae66437d9
--- /dev/null
+++ b/i18n/zh/docusaurus-plugin-content-pages/case-studies/jingying/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-28
+title: 精英数智科技股份有限公司
+subTitle:
+description: 精英数智科技与KubeEdge合作开发矿脑解决方案,覆盖云、边、端,让煤炭生产更安全。
+tags:
+  - 解决方案
+---
+
+# 基于KubeEdge的矿山大脑解决方案
+
+## 商业背景
+
+精英数智科技有限公司专注于为煤矿及瓦斯企业提供安全监控管理解决方案,提供可靠稳定的数据采集传输、现场感知、风险预测、智能监管等解决方案,帮助企业提高生产安全性,降低管理成本。
+精英数智科技利用AIoT和云边端协同,构建高危行业安全生产智能感知网络,推动新一代信息技术与安全生产的深度融合。
+
+## 解决方案
+
+精英数智科技有限公司与KubeEdge合作开发了矿山大脑解决方案,该方案覆盖云、边、端,让煤炭生产更安全。该方案具有以下优势:
+
+- KubeEdge兼容Kubernetes生态,支持Kubernetes应用平滑迁移到KubeEdge,大幅提升部署效率。
+- AI模型在云端训练,模型推理在边缘进行,大大提高资源利用率和推理速度。
+- 即使边缘节点与云端断开连接,服务实例也能自动恢复并正常运行,使系统更加可靠。
+- 边缘智能、强大的计算能力以及对海量边缘设备的管理,使得多种场景的精准音视频识别成为可能。
+
+精英数智科技股份有限公司在多年积累的基础上,具备了丰富的AI场景能力和云边端运维能力,有效保障了服务的可靠和识别的精准。
+
+## 优势
+
+山西煤矿企业通过矿山大脑解决方案,已实现千余座矿井的智能化开采:云端下发的AI分析算法实时评估风险,识别率高达98%;远程IT基础设施集中监控降低运维成本65%;全栈IT设备集成部署降低部署成本75%。矿山大脑助力煤炭行业安全生产,最终实现全行业智能化升级。精英数智科技股份有限公司将继续与KubeEdge携手,利用AI、IoT、大数据等技术,为煤炭行业安全生产推出全方位的智能边缘解决方案。
\ No newline at end of file
diff --git a/src/components/supporters/index.js b/src/components/supporters/index.js
index 98d3452e59..bd17c6f127 100644
--- a/src/components/supporters/index.js
+++ b/src/components/supporters/index.js
@@ -207,6 +207,12 @@ const supportList = [
     name: "SF Technology",
     img_src: "img/supporters/sf-tech.png",
     external_link: "https://www.sf-tech.com.cn/",
+  },
+
+  {
+    name: "LookCan Ai",
+    img_src: "img/supporters/lookcan-logo.svg",
+    external_link: "https://www.lookcan.ai/",
   }
 ];
 
@@ -217,7 +223,7 @@ export default function Supporters() {
-        <p>
+        <p className="joins">
           Join the Growing
diff --git a/src/components/supporters/index.scss b/src/components/supporters/index.scss
index 0cb5380619..134a23958a 100644
--- a/src/components/supporters/index.scss
+++ b/src/components/supporters/index.scss
@@ -65,8 +65,15 @@
   }
 }
 
+.joins {
+  color: black;
+}
+
 html[data-theme="dark"] {
   .supporterContainer {
     background-color: #242526;
   }
+  .joins {
+    color: #c4c4d1;
+  }
 }
diff --git a/src/pages/case-studies/Raisecom-Tech/index.mdx b/src/pages/case-studies/Raisecom-Tech/index.mdx
new file mode 100644
index 0000000000..847962c07f
--- /dev/null
+++ b/src/pages/case-studies/Raisecom-Tech/index.mdx
@@ -0,0 +1,24 @@
+---
+date: 2024-05-27
+title: Raisecom Technology Co., Ltd.
+subTitle:
+description: Raisecom Technology used KubeEdge as a key part of its intelligent monitoring solution, enabling effective AI monitoring of factory safety, reducing safety accidents, and improving production efficiency.
+tags:
+  - UserCase
+---
+
+# Intelligent monitoring solution based on KubeEdge
+
+## Challenge
+
+Ensuring industrial production safety is a key requirement for Raisecom Technology's factories. Traditionally, worker safety was checked manually, which was slow and inefficient. Workers still occasionally violated safety requirements, and such violations could go unnoticed, creating serious safety risks and hurting the factory's production efficiency.
+
+## Solution
+
+An industrial intelligent monitoring application based on AI algorithms was developed to replace manual inspection. The application alone was not enough, however. New problems arose, such as deploying and managing the intelligent edge application and coordinating model training on the cloud with inference on the edge, and they threatened to become a bottleneck for large-scale use of the solution in industrial production environments.
+
+China Telecom Research Institute used KubeEdge as a key part of the intelligent monitoring solution to help Raisecom Technology solve these problems, with architect Xiaohou Shi of China Telecom Research Institute designing the solution. In this case, the safety status of factory workers is monitored in real time by an industrial vision application built on deep learning algorithms. KubeEdge serves as the edge computing platform that manages the edge devices and the runtime environment of the intelligent monitoring application. With KubeEdge, the monitoring model can be trained on the cloud and automatically deployed to edge nodes for inference, improving operational efficiency and reducing maintenance costs.
+
+## Impact
+
+In this scenario, KubeEdge provides unified management of edge applications and makes full use of cloud-edge collaboration. With KubeEdge as the edge computing platform, AI-based safety monitoring of the factory was implemented effectively, reducing safety accidents and improving the factory's production efficiency.
+
+Building on this successful case, more deep learning algorithms will be deployed on KubeEdge to tackle edge computing problems, and more cooperation with KubeEdge on scenario-specific industrial intelligent applications will be carried out in the future.
diff --git a/src/pages/case-studies/XingHai/index.mdx b/src/pages/case-studies/XingHai/index.mdx
new file mode 100644
index 0000000000..28955d8761
--- /dev/null
+++ b/src/pages/case-studies/XingHai/index.mdx
@@ -0,0 +1,30 @@
+---
+date: 2024-05-27
+title: XingHai IoT
+subTitle:
+description: Xinghai IoT uses KubeEdge to build a smart campus with cloud-edge-device collaboration, greatly improving campus management efficiency.
+tags:
+  - UserCase
+---
+
+# Building smart campuses based on KubeEdge
+
+## Challenge
+
+Xinghai IoT is an IoT company that provides comprehensive smart building solutions by combining a construction IoT platform, intelligent hardware, and AI. It is a creator and practitioner of the smart campus standards of China Overseas Property Management and a core full-chain service provider for Huawei's smart campus solutions.
+
+The company serves customers in 80 major cities in China and around the world. It has delivered 741 projects covering more than 156 million square meters, spanning a diverse range of building types such as high-end residential buildings, commercial complexes, super high-rise office buildings, government properties, and industrial parks.
+
+In recent years, as its business has expanded and occupants' demands for service quality have grown, Xinghai IoT has been committed to using edge computing and IoT to build sustainable smart campuses and improve the efficiency of campus operations and management.
+
+## Highlights
+
+Xinghai IoT now offers services in a wide range of areas, so its solutions must be portable and replicable while ensuring real-time data processing and secure data storage. KubeEdge, designed for cloud native development and edge-cloud synergy, has become an indispensable part of how Xinghai IoT builds smart campuses.
+
+- Container images are built once and run anywhere, effectively reducing the deployment and O&M complexity of new campuses.
+- Edge-cloud synergy enables data to be processed at the edge, ensuring real-time performance and security and lowering network bandwidth costs.
+- KubeEdge makes adding hardware easy and supports common protocols, with no secondary development needed.
+
+## Benefits
+
+Xinghai IoT built a smart campus with cloud-edge-device synergy based on KubeEdge and its own Xinghai IoT cloud platform, greatly improving the efficiency of campus management. With AI assistance, nearly 30% of repetitive work has been automated. In the future, Xinghai IoT will continue to collaborate with KubeEdge to launch KubeEdge-based smart campus solutions.
\ No newline at end of file
diff --git a/src/pages/case-studies/jingying/index.mdx b/src/pages/case-studies/jingying/index.mdx
new file mode 100644
index 0000000000..3a0600fd79
--- /dev/null
+++ b/src/pages/case-studies/jingying/index.mdx
@@ -0,0 +1,33 @@
+---
+date: 2024-05-28
+title: Jingying Shuzhi Technology Co., Ltd
+subTitle:
+description: Jingying Shuzhi Technology Co., Ltd worked with KubeEdge to develop the Mine Brain solution, which covers the cloud, edge, and devices and makes coal production safer.
+tags:
+  - Solution
+---
+
+# Mine Brain solution based on KubeEdge
+
+## Business Background
+
+Jingying Shuzhi Technology Co., Ltd focuses on providing safety monitoring and management solutions for coal mining and gas enterprises.
+Their solutions cover reliable and stable data collection and transmission, on-site perception, risk prediction, and intelligent supervision, helping these enterprises improve production safety and reduce management costs.
+By leveraging AIoT and cloud-edge-device synergy, Jingying Shuzhi Technology Co., Ltd has built an intelligent sensing network for safe production in high-risk industries, promoting the in-depth integration of next-generation information technologies and safe production.
+
+## Highlights
+
+Jingying Shuzhi Technology Co., Ltd worked with KubeEdge to develop the Mine Brain solution, which covers the cloud, edge, and devices and makes coal production safer.
+This solution has the following advantages:
+
+- KubeEdge is compatible with the Kubernetes ecosystem. It allows Kubernetes applications to be smoothly migrated to KubeEdge, greatly improving deployment efficiency.
+- AI models are trained on the cloud and model inference is performed on the edge, greatly improving resource utilization and inference speed.
+- Service instances can recover automatically and run normally even if edge nodes are disconnected from the cloud, making the system more reliable.
+- Edge intelligence, powerful computing, and management of a massive number of edge devices make precise audio and video recognition possible for a range of different scenarios.
+
+Building on years of accumulated experience, Jingying Shuzhi Technology Co., Ltd has developed rich AI scenario capabilities and cloud-edge-device O&M capabilities, effectively ensuring reliable services and precise recognition.
+
+## Benefits
+
+With the Mine Brain solution, coal mining enterprises in Shanxi have brought intelligent mining to more than 1,000 mines. AI analysis algorithms delivered from the cloud assess risks in real time with a recognition rate of up to 98%, centralized remote monitoring of IT infrastructure reduces O&M costs by 65%, and integrated deployment of full-stack IT devices reduces deployment costs by 75%.
+The Mine Brain solution helps the coal industry achieve safe production and is driving the intelligent upgrade of the entire industry. Jingying Shuzhi Technology Co., Ltd will continue to work with KubeEdge, using AI, IoT, big data, and other technologies to deliver comprehensive intelligent edge solutions for safe production in the coal industry.
\ No newline at end of file
diff --git a/static/img/supporters/lookcan-logo.svg b/static/img/supporters/lookcan-logo.svg
new file mode 100644
index 0000000000..63358af707
--- /dev/null
+++ b/static/img/supporters/lookcan-logo.svg
@@ -0,0 +1,17 @@
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
+
diff --git a/versionsArchived.json b/versionsArchived.json
index 6daa34e043..13397aeeda 100644
--- a/versionsArchived.json
+++ b/versionsArchived.json
@@ -1,7 +1,7 @@
 {
-  "Next": "https://kubeedge.io/docs/",
+  "Next": "/docs/",
   "v1.17": "https://release-1-17.docs.kubeedge.io/docs/",
   "v1.16": "https://release-1-16.docs.kubeedge.io/docs/",
   "v1.15": "https://release-1-15.docs.kubeedge.io/docs/",
   "v1.14": "https://release-1-14.docs.kubeedge.io/docs/"
-}
+}
\ No newline at end of file