diff --git a/docs/usage/install/overlay/get-started-calico-zh_cn.md b/docs/usage/install/overlay/get-started-calico-zh_cn.md
index af41cb1aa5..f726502c83 100644
--- a/docs/usage/install/overlay/get-started-calico-zh_cn.md
+++ b/docs/usage/install/overlay/get-started-calico-zh_cn.md
@@ -88,10 +88,43 @@ status:
   serviceCIDR:
   - 10.233.0.0/18
 ```
 
+> 目前 Spiderpool 优先通过查询 `kube-system/kubeadm-config` ConfigMap 获取集群的 Pod 和 Service 子网。如果 kubeadm-config 不存在导致无法获取集群子网,那么 Spiderpool 会从 kube-controller-manager Pod 中获取集群 Pod 和 Service 的子网。如果您集群的 kube-controller-manager 组件以 `systemd` 方式而不是以静态 Pod 运行,那么 Spiderpool 仍然无法获取集群的子网信息。
+
-> 1.如果 phase 不为 Synced, 那么将会阻止 Pod 被创建
->
-> 2.如果 overlayPodCIDR 不正常, 可能会导致通信问题
+如果上面两种方式都失败,Spiderpool 会将 status.phase 同步为 NotReady,这将会阻止 Pod 被创建。我们可以通过下面两种方式解决异常情况:
+
+- 手动创建 kubeadm-config ConfigMap,并正确配置集群的子网信息:
+
+```shell
+export POD_SUBNET=
+export SERVICE_SUBNET=
+cat << EOF | kubectl apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kubeadm-config
+  namespace: kube-system
+data:
+  ClusterConfiguration: |
+    networking:
+      podSubnet: ${POD_SUBNET}
+      serviceSubnet: ${SERVICE_SUBNET}
+EOF
+```
+
+一旦创建完成,Spiderpool 将会自动同步其状态。
+
+- 设置 `podCIDRType` 为 `none`,这种情况下 Spiderpool 将不会主动同步集群的子网信息。您可以向 `hijackCIDR` 字段手动添加集群的子网信息:
+
+```yaml
+...
+  podCIDRType: none
+  hijackCIDR:
+  - 169.254.0.0/16
+  - 10.244.0.0/16 # 集群的 Pod 子网
+  - 10.69.0.0/16 # 集群的 Service 子网
+...
+```
 
 ### 创建 SpiderIPPool
 
diff --git a/docs/usage/install/overlay/get-started-calico.md b/docs/usage/install/overlay/get-started-calico.md
index 4bdc8cbab1..8116ec31f8 100644
--- a/docs/usage/install/overlay/get-started-calico.md
+++ b/docs/usage/install/overlay/get-started-calico.md
@@ -84,9 +84,42 @@ status:
   - 10.233.0.0/18
 ```
 
-> 1.If the phase is not synced, the pod will be prevented from being created.
->
-> 2.If the overlayPodCIDR does not meet expectations, it may cause pod communication issue.
+> At present, Spiderpool obtains the cluster's Pod and Service subnets by first querying the `kube-system/kubeadm-config` ConfigMap. If the kubeadm-config ConfigMap does not exist and the subnets therefore cannot be read from it, Spiderpool falls back to retrieving them from the kube-controller-manager Pod. If the kube-controller-manager component in your cluster runs via `systemd` instead of as a static Pod, Spiderpool is still unable to retrieve the cluster's subnet information.
+
+If both of the above methods fail, Spiderpool sets status.phase to NotReady, which prevents Pods from being created. You can resolve this in either of the following two ways:
+
+- Manually create the kubeadm-config ConfigMap and configure the cluster's subnet information correctly:
+
+```shell
+export POD_SUBNET=
+export SERVICE_SUBNET=
+cat << EOF | kubectl apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kubeadm-config
+  namespace: kube-system
+data:
+  ClusterConfiguration: |
+    networking:
+      podSubnet: ${POD_SUBNET}
+      serviceSubnet: ${SERVICE_SUBNET}
+EOF
+```
+
+Once it is created, Spiderpool automatically synchronizes its status.
+
+- Set `spidercoordinator.podCIDRType` to `none`. In this case, Spiderpool does not synchronize the cluster's subnet information itself; instead, you add the cluster's subnets to the `hijackCIDR` field manually:
+
+```yaml
+...
+  podCIDRType: none
+  hijackCIDR:
+  - 169.254.0.0/16
+  - 10.244.0.0/16 # the Pod CIDR of the cluster
+  - 10.69.0.0/16 # the Service CIDR of the cluster
+...
+```
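+
+If you prefer patching the existing resource directly, the following sketch sets both fields in one step. It assumes the default SpiderCoordinator object is named `default`; replace the CIDRs with your cluster's own:
+
+```shell
+kubectl patch spidercoordinator default --type merge \
+  -p '{"spec": {"podCIDRType": "none", "hijackCIDR": ["10.244.0.0/16", "10.69.0.0/16"]}}'
+```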
 
 ### Create SpiderIPPool
 
diff --git a/docs/usage/install/overlay/get-started-cilium-zh_cn.md b/docs/usage/install/overlay/get-started-cilium-zh_cn.md
index d831bbb6ae..659f9ba32e 100644
--- a/docs/usage/install/overlay/get-started-cilium-zh_cn.md
+++ b/docs/usage/install/overlay/get-started-cilium-zh_cn.md
@@ -85,9 +85,42 @@ status:
   - 10.233.0.0/18
 ```
 
-> 1.如果 phase 不为 Synced, 那么将会阻止 Pod 被创建
->
-> 2.如果 overlayPodCIDR 不正常, 可能会导致通信问题
+> 目前 Spiderpool 优先通过查询 `kube-system/kubeadm-config` ConfigMap 获取集群的 Pod 和 Service 子网。如果 kubeadm-config 不存在导致无法获取集群子网,那么 Spiderpool 会从 kube-controller-manager Pod 中获取集群 Pod 和 Service 的子网。如果您集群的 kube-controller-manager 组件以 `systemd` 方式而不是以静态 Pod 运行,那么 Spiderpool 仍然无法获取集群的子网信息。
+
+如果上面两种方式都失败,Spiderpool 会将 status.phase 同步为 NotReady,这将会阻止 Pod 被创建。我们可以通过下面两种方式解决异常情况:
+
+- 手动创建 kubeadm-config ConfigMap,并正确配置集群的子网信息:
+
+```shell
+export POD_SUBNET=
+export SERVICE_SUBNET=
+cat << EOF | kubectl apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kubeadm-config
+  namespace: kube-system
+data:
+  ClusterConfiguration: |
+    networking:
+      podSubnet: ${POD_SUBNET}
+      serviceSubnet: ${SERVICE_SUBNET}
+EOF
+```
+
+一旦创建完成,Spiderpool 将会自动同步其状态。
+
+- 设置 `podCIDRType` 为 `none`,这种情况下 Spiderpool 将不会主动同步集群的子网信息。您可以向 `hijackCIDR` 字段手动添加集群的子网信息:
+
+```yaml
+...
+  podCIDRType: none
+  hijackCIDR:
+  - 169.254.0.0/16
+  - 10.244.0.0/16 # 集群的 Pod 子网
+  - 10.69.0.0/16 # 集群的 Service 子网
+...
+```
 
 ### 创建 SpiderIPPool
 
diff --git a/docs/usage/install/overlay/get-started-cilium.md b/docs/usage/install/overlay/get-started-cilium.md
index 281279d5da..9bd5075b07 100644
--- a/docs/usage/install/overlay/get-started-cilium.md
+++ b/docs/usage/install/overlay/get-started-cilium.md
@@ -85,9 +85,42 @@ status:
   - 10.233.0.0/18
 ```
 
-> 1.If the phase is not synced, the pod will be prevented from being created.
->
-> 2.If the overlayPodCIDR does not meet expectations, it may cause pod communication issue.
+> At present, Spiderpool obtains the cluster's Pod and Service subnets by first querying the `kube-system/kubeadm-config` ConfigMap. If the kubeadm-config ConfigMap does not exist and the subnets therefore cannot be read from it, Spiderpool falls back to retrieving them from the kube-controller-manager Pod. If the kube-controller-manager component in your cluster runs via `systemd` instead of as a static Pod, Spiderpool is still unable to retrieve the cluster's subnet information.
+
+If both of the above methods fail, Spiderpool sets status.phase to NotReady, which prevents Pods from being created. You can resolve this in either of the following two ways:
+
+- Manually create the kubeadm-config ConfigMap and configure the cluster's subnet information correctly:
+
+```shell
+export POD_SUBNET=
+export SERVICE_SUBNET=
+cat << EOF | kubectl apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: kubeadm-config
+  namespace: kube-system
+data:
+  ClusterConfiguration: |
+    networking:
+      podSubnet: ${POD_SUBNET}
+      serviceSubnet: ${SERVICE_SUBNET}
+EOF
+```
+
+Once it is created, Spiderpool automatically synchronizes its status.
+
+- Set `spidercoordinator.podCIDRType` to `none`. In this case, Spiderpool does not synchronize the cluster's subnet information itself; instead, you add the cluster's subnets to the `hijackCIDR` field manually:
+
+```yaml
+...
+  podCIDRType: none
+  hijackCIDR:
+  - 169.254.0.0/16
+  - 10.244.0.0/16 # the Pod CIDR of the cluster
+  - 10.69.0.0/16 # the Service CIDR of the cluster
+...
+```
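+
+Whichever approach you take, you can verify the synchronized result afterwards. A sketch, assuming the default SpiderCoordinator object is named `default`:
+
+```shell
+kubectl get spidercoordinator default -o jsonpath='{.status.phase}{"\n"}'
+```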
 
 ### Create SpiderIPPool
 
diff --git a/pkg/coordinatormanager/coordinator_informer.go b/pkg/coordinatormanager/coordinator_informer.go
index 6b61259e6c..e29f63772b 100644
--- a/pkg/coordinatormanager/coordinator_informer.go
+++ b/pkg/coordinatormanager/coordinator_informer.go
@@ -334,36 +334,8 @@ func (cc *CoordinatorController) syncHandler(ctx context.Context, coordinatorNam
 }
 
 func (cc *CoordinatorController) fetchPodAndServerCIDR(ctx context.Context, logger *zap.Logger, coordCopy *spiderpoolv2beta1.SpiderCoordinator) (*spiderpoolv2beta1.SpiderCoordinator, error) {
-	var err error
-	var cmPodList corev1.PodList
-	if err := cc.APIReader.List(ctx, &cmPodList, client.MatchingLabels{"component": "kube-controller-manager"}); err != nil {
-		event.EventRecorder.Eventf(
-			coordCopy,
-			corev1.EventTypeWarning,
-			"ClusterNotReady",
-			err.Error(),
-		)
-
-		setStatus2NoReady(logger, coordCopy)
-		return coordCopy, err
-	}
-	if len(cmPodList.Items) == 0 {
-		msg := `Failed to get kube-controller-manager Pod with label "component: kube-controller-manager"`
-		event.EventRecorder.Eventf(
-			coordCopy,
-			corev1.EventTypeWarning,
-			"ClusterNotReady",
-			msg,
-		)
-
-		setStatus2NoReady(logger, coordCopy)
-		return coordCopy, err
-	}
-
-	k8sPodCIDR, k8sServiceCIDR := extractK8sCIDR(&cmPodList.Items[0])
 	if *coordCopy.Spec.PodCIDRType == auto {
-		var podCidrType string
-		podCidrType, err = fetchType(cc.DefaultCniConfDir)
+		podCidrType, err := fetchType(cc.DefaultCniConfDir)
 		if err != nil {
 			if apierrors.IsNotFound(err) {
 				event.EventRecorder.Eventf(
@@ -381,6 +353,37 @@ func (cc *CoordinatorController) fetchPodAndServerCIDR(ctx context.Context, logg
 		coordCopy.Spec.PodCIDRType = &podCidrType
 	}
 
+	if *coordCopy.Spec.PodCIDRType == none {
+		coordCopy.Status.Phase = Synced
+		coordCopy.Status.OverlayPodCIDR = []string{}
+		coordCopy.Status.ServiceCIDR = []string{}
+		return coordCopy, nil
+	}
+
+	var err error
+	var k8sPodCIDR, k8sServiceCIDR []string
+	// Allocate the ConfigMap before passing it to Get; a nil pointer would panic.
+	var cm corev1.ConfigMap
+	if err := cc.APIReader.Get(ctx, types.NamespacedName{Namespace: metav1.NamespaceSystem, Name: "kubeadm-config"}, &cm); err == nil {
+		logger.Sugar().Infof("Trying to fetch the ClusterCIDR from kube-system/kubeadm-config")
+		k8sPodCIDR, k8sServiceCIDR = extractK8sCIDRFromKubeadmConfigMap(&cm)
+	} else {
+		logger.Sugar().Warn("kube-system/kubeadm-config is not found, trying to fetch the ClusterCIDR from kube-controller-manager Pod")
+		var cmPodList corev1.PodList
+		err = cc.APIReader.List(ctx, &cmPodList, client.MatchingLabels{"component": "kube-controller-manager"})
+		if err != nil || len(cmPodList.Items) == 0 {
+			logger.Sugar().Errorf("failed to get kube-controller-manager Pod with label \"component: kube-controller-manager\": %v", err)
+			event.EventRecorder.Eventf(
+				coordCopy,
+				corev1.EventTypeWarning,
+				"ClusterNotReady",
+				"Neither kubeadm-config ConfigMap nor kube-controller-manager Pod can be found",
+			)
+			setStatus2NoReady(logger, coordCopy)
+			return coordCopy, err
+		}
+		k8sPodCIDR, k8sServiceCIDR = extractK8sCIDRFromKCMPod(&cmPodList.Items[0])
+	}
+
 	switch *coordCopy.Spec.PodCIDRType {
 	case cluster:
 		if cc.caliCtrlCanncel != nil {
@@ -397,9 +400,6 @@
 		if err = cc.fetchCiliumCIDR(ctx, logger, k8sPodCIDR, coordCopy); err != nil {
 			return coordCopy, err
 		}
-	case none:
-		coordCopy.Status.Phase = Synced
-		coordCopy.Status.OverlayPodCIDR = []string{}
 	}
 
 	coordCopy.Status.ServiceCIDR = k8sServiceCIDR
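The function above prefers the `kube-system/kubeadm-config` ConfigMap and only falls back to listing kube-controller-manager Pods when that Get fails. Below is a minimal, self-contained sketch of the same lookup order against controller-runtime's fake client; `lookupClusterCIDRSource` and the sample objects are illustrative names, not part of this change:

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	"k8s.io/apimachinery/pkg/types"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/client/fake"
)

// lookupClusterCIDRSource mirrors the fallback order used by
// fetchPodAndServerCIDR: prefer the kubeadm-config ConfigMap, then
// fall back to the kube-controller-manager static Pod.
func lookupClusterCIDRSource(ctx context.Context, c client.Reader) (string, error) {
	// Pass a pointer to an allocated struct, never a nil pointer.
	var cm corev1.ConfigMap
	if err := c.Get(ctx, types.NamespacedName{Namespace: "kube-system", Name: "kubeadm-config"}, &cm); err == nil {
		return "kubeadm-config", nil
	}

	var pods corev1.PodList
	if err := c.List(ctx, &pods, client.MatchingLabels{"component": "kube-controller-manager"}); err != nil {
		return "", err
	}
	if len(pods.Items) == 0 {
		return "", fmt.Errorf("neither kubeadm-config nor kube-controller-manager Pod found")
	}
	return "kube-controller-manager", nil
}

func main() {
	// A fake cluster that only contains the kube-controller-manager Pod,
	// so the ConfigMap lookup fails and the fallback path is taken.
	kcm := &corev1.Pod{}
	kcm.Name = "kube-controller-manager-node1"
	kcm.Namespace = "kube-system"
	kcm.Labels = map[string]string{"component": "kube-controller-manager"}

	c := fake.NewClientBuilder().WithObjects(kcm).Build()
	src, err := lookupClusterCIDRSource(context.TODO(), c)
	fmt.Println(src, err) // kube-controller-manager <nil>
}
```

Allocating the ConfigMap value before calling Get matters: client.Reader.Get unmarshals into the object it is given, so handing it a nil pointer panics at runtime.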
@@ -538,7 +538,42 @@ func (cc *CoordinatorController) fetchCiliumCIDR(ctx context.Context, logger *za
 	return nil
 }
 
-func extractK8sCIDR(kcm *corev1.Pod) ([]string, []string) {
+func extractK8sCIDRFromKubeadmConfigMap(cm *corev1.ConfigMap) ([]string, []string) {
+	var podCIDR, serviceCIDR []string
+
+	podReg := regexp.MustCompile(`podSubnet: (.*)`)
+	serviceReg := regexp.MustCompile(`serviceSubnet: (.*)`)
+
+	var podSubnets, serviceSubnets []string
+	for _, data := range cm.Data {
+		// Keep the first match; a later key that does not match
+		// (e.g. ClusterStatus) must not overwrite it with nil.
+		if len(podSubnets) == 0 {
+			podSubnets = podReg.FindStringSubmatch(data)
+		}
+		if len(serviceSubnets) == 0 {
+			serviceSubnets = serviceReg.FindStringSubmatch(data)
+		}
+	}
+
+	if len(podSubnets) != 0 {
+		for _, cidr := range strings.Split(podSubnets[1], ",") {
+			cidr = strings.TrimSpace(cidr)
+			_, _, err := net.ParseCIDR(cidr)
+			if err != nil {
+				continue
+			}
+			podCIDR = append(podCIDR, cidr)
+		}
+	}
+
+	if len(serviceSubnets) != 0 {
+		for _, cidr := range strings.Split(serviceSubnets[1], ",") {
+			cidr = strings.TrimSpace(cidr)
+			_, _, err := net.ParseCIDR(cidr)
+			if err != nil {
+				continue
+			}
+			serviceCIDR = append(serviceCIDR, cidr)
+		}
+	}
+
+	return podCIDR, serviceCIDR
+}
+
+func extractK8sCIDRFromKCMPod(kcm *corev1.Pod) ([]string, []string) {
 	var podCIDR, serviceCIDR []string
 
 	podReg := regexp.MustCompile(`--cluster-cidr=(.*)`)
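Since extractK8sCIDRFromKubeadmConfigMap parses free-form ConfigMap data with regular expressions, a small unit test pins down the expected behavior. A sketch, assuming it lives in the same coordinatormanager package; the kubeadm payload below is hand-written for illustration:

```go
package coordinatormanager

import (
	"testing"

	corev1 "k8s.io/api/core/v1"
)

func TestExtractK8sCIDRFromKubeadmConfigMap(t *testing.T) {
	cm := &corev1.ConfigMap{
		Data: map[string]string{
			// A second, non-matching key exercises the "keep the first
			// match" behavior of the extraction loop.
			"ClusterStatus":        "apiEndpoints: {}\n",
			"ClusterConfiguration": "networking:\n  podSubnet: 10.244.0.0/16\n  serviceSubnet: 10.96.0.0/12\n",
		},
	}

	podCIDR, serviceCIDR := extractK8sCIDRFromKubeadmConfigMap(cm)
	if len(podCIDR) != 1 || podCIDR[0] != "10.244.0.0/16" {
		t.Fatalf("unexpected pod CIDR: %v", podCIDR)
	}
	if len(serviceCIDR) != 1 || serviceCIDR[0] != "10.96.0.0/12" {
		t.Fatalf("unexpected service CIDR: %v", serviceCIDR)
	}
}
```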