diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring.mdx index 4c1fa60a2d4..16a2e05ad14 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring.mdx @@ -33,7 +33,7 @@ We monitor and collect [metrics](/docs/integrations/kubernetes-integration/under ## Control plane component [#component] -The task of monitoring the Kubernetes control plane is a responsibility of the `nrk8s-controlplane` component, which by default is deployed as a DaemonSet. This component is automatically deployed to master nodes, through the use of a default list of `nodeSelectorTerms` which includes labels commonly used to identify master nodes, such as `node-role.kubernetes.io/control-plane` or `node-role.kubernetes.io/master`. Regardless, this selector is exposed in the `values.yml` file and therefore can be reconfigured to fit other environments. +The task of monitoring the Kubernetes control plane is the responsibility of the `nrk8s-controlplane` component, which by default is deployed as a DaemonSet. This component is automatically deployed to control plane nodes, through the use of a default list of `nodeSelectorTerms` which includes labels commonly used to identify control plane nodes, such as `node-role.kubernetes.io/control-plane`. Regardless, this selector is exposed in the `values.yml` file and therefore can be reconfigured to fit other environments. Clusters that do not have any node matching these selectors will not get any pod scheduled, thus not wasting any resources and being functionally equivalent of disabling control plane monitoring altogether by setting `controlPlane.enabled` to `false` in the Helm Chart. @@ -47,7 +47,7 @@ Each component of the control plane has a dedicated section, which allows to ind Diagram showing a possible configuration scraping etcd with mTLS and API server with bearer Token. The monitoring is a DaemonSet deployed on master nodes only. @@ -153,7 +153,7 @@ Our integration accepts a secret with the following keys: These certificates should be signed by the same CA etcd is using to operate. -How to generate these certificates is out of the scope of this documentation, as it will vary greatly between different Kubernetes distribution. Please refer to your distribution's documentation to see how to fetch the required etcd peer certificates. In Kubeadm, for example, they can be found in `/etc/kubernetes/pki/etcd/peer.{crt,key}` in the master node. +How to generate these certificates is out of the scope of this documentation, as it will vary greatly between different Kubernetes distributions. Please refer to your distribution's documentation to see how to fetch the required etcd peer certificates. In Kubeadm, for example, they can be found in `/etc/kubernetes/pki/etcd/peer.{crt,key}` on the control plane node.
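As a quick sanity check before creating the secret (a minimal sketch, assuming the default kubeadm layout and shell access to the control plane node), you can confirm the peer certificate pair exists at that path:

```shell
# Verify the etcd peer certificate and key are present on a kubeadm control plane node.
ls -l /etc/kubernetes/pki/etcd/peer.crt /etc/kubernetes/pki/etcd/peer.key
```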
Once you have located or generated the etcd peer certificates, you should rename the files to match the keys we expect to be present in the secret, and create the secret in the cluster diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/k8s-version2/errors.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/k8s-version2/errors.mdx index 84f9269939e..e306ff4c9c2 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/k8s-version2/errors.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/k8s-version2/errors.mdx @@ -138,37 +138,33 @@ If you're running version 2, check out these common Kubernetes integration error - Execute the following commands to manually find the master nodes: + Execute the following command to manually find the control plane nodes: ```shell - kubectl get nodes -l node-role.kubernetes.io/master="" + kubectl get nodes -l node-role.kubernetes.io/control-plane="" ``` - ```shell - kubectl get nodes -l kubernetes.io/role="master" - ``` - - If the master nodes follow the labeling convention defined in the [Control plane component](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring/#component), you should get some output like: + If the control plane nodes follow the labeling convention defined in the [Control plane component](/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/configure-control-plane-monitoring/#component), you should get some output like: ```shell - NAME STATUS ROLES AGE VERSION - ip-10-42-24-4.ec2.internal Ready master 42d v1.14.8 + NAME STATUS ROLES AGE VERSION + ip-10-42-24-4.ec2.internal Ready control-plane 42d v1.14.8 ``` If no nodes are found, there are two scenarios: - Your master nodes don't have the required labels that identify them as masters. In this case, you need to add both labels to your master nodes. + Your control plane nodes don't have the required label that identifies them as control plane nodes. In this case, you need to add the `node-role.kubernetes.io/control-plane=""` label to your control plane nodes (see the example below). - You're in a managed cluster and your provider is handling the master nodes for you. In this case, there is nothing you can do, since your provider is limiting the access to those nodes. + You're in a managed cluster and your provider is handling the control plane nodes for you. In this case, there is nothing you can do, since your provider is limiting access to those nodes.
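If you manage the control plane nodes yourself, a minimal sketch of adding the missing label could look like this (`NODE_NAME` is a placeholder for one of your control plane node names):

```shell
# Label a self-managed control plane node so the integration's node selector matches it.
kubectl label node NODE_NAME node-role.kubernetes.io/control-plane=""
```

Managed providers typically set this label for you or hide the control plane entirely, in which case the command doesn't apply.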
- To identify an integration pod running on a master node, replace `NODE_NAME` in the following command with one of the node names listed in the previous step: + To identify an integration pod running on a control plane node, replace `NODE_NAME` in the following command with one of the node names listed in the previous step: ```shell kubectl get pods --field-selector spec.nodeName=NODE_NAME -l name=newrelic-infra --all-namespaces @@ -177,7 +173,7 @@ If you're running version 2, check out these common Kubernetes integration error The next command is the same as the previous one, just that it selects the node for you: ```shell - kubectl get pods --field-selector spec.nodeName=$(kubectl get nodes -l node-role.kubernetes.io/master="" -o jsonpath="{.items[0].metadata.name}") -l name=newrelic-infra --all-namespaces + kubectl get pods --field-selector spec.nodeName=$(kubectl get nodes -l node-role.kubernetes.io/control-plane="" -o jsonpath="{.items[0].metadata.name}") -l name=newrelic-infra --all-namespaces ``` If everything is correct you should get some output like: @@ -187,7 +183,7 @@ If you're running version 2, check out these common Kubernetes integration error newrelic-infra-whvzt 1/1 Running 0 6d20h ``` - If the integration is not running on your master nodes, check that the daemonset has all the desired instances running and ready. + If the integration is not running on your control plane nodes, check that the daemonset has all the desired instances running and ready. ```shell kubectl get daemonsets -l app=newrelic-infra --all-namespaces @@ -198,7 +194,7 @@ If you're running version 2, check out these common Kubernetes integration error id="indicators" title="Check that the control plane components have the required labels" > - Refer to the [discovery of master nodes and control plane components documentation section](/docs/integrations/kubernetes-integration/installation/configure-control-plane-monitoring#discover-nodes-components) and look for the labels the integration uses to discover the components. Then run the following commands to see if there are any pods with such labels and the nodes where they are running: + Refer to the [discovery of control plane nodes and components documentation section](/docs/integrations/kubernetes-integration/installation/configure-control-plane-monitoring#discover-nodes-components) and look for the labels the integration uses to discover the components. Then run the following commands to see if there are any pods with such labels and the nodes where they are running: ```shell kubectl get pods -l k8s-app=kube-apiserver --all-namespaces @@ -228,9 +224,9 @@ If you're running version 2, check out these common Kubernetes integration error - To retrieve the logs, follow the instructions on [get logs from pod running on a master node](/docs/integrations/kubernetes-integration/troubleshooting/get-logs-version). The integration logs for every component the following message `Running job: COMPONENT_NAME`. Fro example: + To retrieve the logs, follow the instructions on [get logs from pod running on a control plane node](/docs/integrations/kubernetes-integration/troubleshooting/get-logs-version). For every component, the integration logs the following message: `Running job: COMPONENT_NAME`.
For example: ```shell Running job: scheduler @@ -270,7 +266,7 @@ If you're running version 2, check out these common Kubernetes integration error The following command does the same as the previous one, but also chooses the pod for you: ```shell - kubectl exec -ti $(kubectl get pods --all-namespaces --field-selector spec.nodeName=$(kubectl get nodes -l node-role.kubernetes.io/master="" -o jsonpath="{.items[0].metadata.name}") -l name=newrelic-infra -o jsonpath="{.items[0].metadata.name}") -- wget -O - localhost:10251/metrics + kubectl exec -ti $(kubectl get pods --all-namespaces --field-selector spec.nodeName=$(kubectl get nodes -l node-role.kubernetes.io/control-plane="" -o jsonpath="{.items[0].metadata.name}") -l name=newrelic-infra -o jsonpath="{.items[0].metadata.name}") -- wget -O - localhost:10251/metrics ``` If everything is correct, you should get some metrics on the Prometheus format, something like this: diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/k8s-version2/overview.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/k8s-version2/overview.mdx index 9a7ed63dbca..dd2dd63a480 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/k8s-version2/overview.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/k8s-version2/overview.mdx @@ -28,9 +28,9 @@ Please note that these versions had a less flexible autodiscovery options, and d In versions lower than v3, when the integration is deployed using `privileged: false`, the `hostNetwork` setting for the control plane component will be also be set to `false`. -### Discovery of master nodes and control plane components [#discover-nodes-components] +### Discovery of control plane nodes and control plane components [#discover-nodes-components] -The Kubernetes integration relies on the [`kubeadm`](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) labeling conventions to discover the master nodes and the control plane components. This means that master nodes should be labeled with `node-role.kubernetes.io/master=""` or `kubernetes.io/role="master"`. +The Kubernetes integration relies on the [`kubeadm`](https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/) labeling conventions to discover the control plane nodes and the control plane components. This means that control plane nodes should be labeled with `node-role.kubernetes.io/control-plane=""`. The control plane components should have either the `k8s-app` or the `tier` and `component` labels. See this table for accepted label combinations and values: @@ -158,11 +158,11 @@ The control plane components should have either the `k8s-app` or the `tier` and -When the integration detects that it's running inside a master node, it tries to find which components are running on the node by looking for pods that match the labels listed in the table above. For every running component, the integration makes a request to its metrics endpoint. +When the integration detects that it's running inside a control plane node, it tries to find which components are running on the node by looking for pods that match the labels listed in the table above. For every running component, the integration makes a request to its metrics endpoint. ### Configuration -Control plane monitoring is automatic for agents running inside master nodes.
The only component that requires an extra step to run is etcd, because it uses mutual TLS authentication (mTLS) for client requests. The API Server can also be configured to be queried using the [Secure Port](https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#api-server-ports-and-ips). +Control plane monitoring is automatic for agents running inside control plane nodes. The only component that requires an extra step to run is etcd, because it uses mutual TLS authentication (mTLS) for client requests. The API Server can also be configured to be queried using the [Secure Port](https://kubernetes.io/docs/reference/access-authn-authz/controlling-access/#api-server-ports-and-ips). Control plane monitoring for [OpenShift](http://learn.openshift.com/?extIdCarryOver=true&sc_cid=701f2000001OH7iAAG) 4.x requires additional configuration. For more information, see the [OpenShift 4.x Configuration](#openshift-4x-configuration) section. @@ -424,27 +424,21 @@ If you want to generate verbose logs and get version and configuration informati - To get the logs from a pod running on a master node: + To get the logs from a pod running on a control plane node: - 1. Get the nodes that are labelled as master: + 1. Get the nodes that are labelled as control plane: ```shell - kubectl get nodes -l node-role.kubernetes.io/master="" - ``` - - Or, - - ```shell - kubectl get nodes -l kubernetes.io/role="master" + kubectl get nodes -l node-role.kubernetes.io/control-plane="" ``` Look for output similar to this: ```shell - NAME STATUS ROLES AGE VERSION - ip-10-42-24-4.ec2.internal Ready master 42d v1.14.8 + NAME STATUS ROLES AGE VERSION + ip-10-42-24-4.ec2.internal Ready control-plane 42d v1.14.8 ``` 2. Get the New Relic pods that are running on one of the nodes returned in the previous step: diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-apm-applications-kubernetes.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-apm-applications-kubernetes.mdx index 37f2d9d0f8e..3721579b277 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-apm-applications-kubernetes.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/advanced-configuration/link-apm-applications-kubernetes.mdx @@ -52,7 +52,7 @@ If you see a different result, follow the Kubernetes documentation to [enable ad ### Network requirements [#network-req] -For Kubernetes to talk to our `MutatingAdmissionWebhook`, the master node (or API server container, depending on how the cluster is set up) should allow egress for HTTPS traffic on port 443 to pods in all other nodes in the cluster. +For Kubernetes to talk to our `MutatingAdmissionWebhook`, the control plane node (or API server container, depending on how the cluster is set up) should allow egress for HTTPS traffic on port 443 to pods in all other nodes in the cluster. This may require specific configuration depending on how your infrastructure is set up (on-premises, AWS, Google Cloud, etc.). 
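As a hedged starting point for troubleshooting this requirement, you can confirm the mutating webhook is registered and inspect which service and port the API server needs to reach (`WEBHOOK_NAME` below is a placeholder; the actual name depends on how the injection component was installed):

```shell
# List registered mutating admission webhooks in the cluster.
kubectl get mutatingwebhookconfigurations

# Inspect a webhook's client configuration (target service and port).
kubectl describe mutatingwebhookconfiguration WEBHOOK_NAME
```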
diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-integration-compatibility-requirements.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-integration-compatibility-requirements.mdx index 5483bf48aa4..e3f66485f49 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-integration-compatibility-requirements.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/get-started/kubernetes-integration-compatibility-requirements.mdx @@ -71,7 +71,7 @@ Our integration is compatible and is continuously tested on the following Kubern - 1.26 to 1.30 + 1.27 to 1.31 diff --git a/src/content/docs/kubernetes-pixie/kubernetes-integration/installation/k8s-otel.mdx b/src/content/docs/kubernetes-pixie/kubernetes-integration/installation/k8s-otel.mdx index 05a4eebd7b3..86ffdb051b1 100644 --- a/src/content/docs/kubernetes-pixie/kubernetes-integration/installation/k8s-otel.mdx +++ b/src/content/docs/kubernetes-pixie/kubernetes-integration/installation/k8s-otel.mdx @@ -25,7 +25,7 @@ The [`nr-k8s-otel-collector`](https://github.com/newrelic/helm-charts/tree/maste * **Deamonset Collector**: Deployed on each worker node and responsible for gathering metrics from the underlying host in the node, the `cAdvisor`, the `Kubelet`, and collecting logs from the containers. -* **Deployment collector**: Deployed on the master node and responsible for gathering metrics of Kube state metrics and Kubernetes cluster events. +* **Deployment collector**: Deployed on the control plane node and responsible for gathering metrics from Kube state metrics and Kubernetes cluster events. -l app.kubernetes.io/component=controlplane -o wide diff --git a/src/content/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/deprecation-notice-v1.26-and-lower.mdx b/src/content/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/deprecation-notice-v1.26-and-lower.mdx new file mode 100644 index 00000000000..08d254677ed --- /dev/null +++ b/src/content/docs/release-notes/infrastructure-release-notes/kubernetes-integration-release-notes/deprecation-notice-v1.26-and-lower.mdx @@ -0,0 +1,25 @@ +--- +title: 'Deprecation notice: Kubernetes' +subject: Kubernetes integration +releaseDate: '2024-10-29' +--- + +Effective Tuesday, October 29, 2024, our Kubernetes integration drops support for Kubernetes v1.26 and lower. The Kubernetes integration v3.30.0 and higher will only be compatible with Kubernetes versions 1.27 and higher. For more information, read this note or contact your account team. + +## Background [#bg] + +Enabling compatibility with the latest Kubernetes versions and adding new features to our Kubernetes offering prevents us from providing first-class support for versions v1.26 and lower. + +## What's happening [#whats-happening] + +* Most major Kubernetes cloud providers have already deprecated v1.26 and lower. + +## What do you need to do [#what-to-do] + +It's easy: [Upgrade your Kubernetes clusters](/docs/integrations/kubernetes-integration/installation/kubernetes-installation-configuration#update) to a supported version. + +## What happens if you don't make any changes to your account [#account] + +The Kubernetes integration may continue to work with unsupported versions. However, we can't guarantee the quality of the solution as new releases may cause some incompatibilities.
+ +Please note that we won't accept support requests for these versions, which have reached the end-of-life stage.
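To see where a cluster stands before upgrading, a simple sketch (nothing assumed beyond `kubectl` access to the cluster) is to check the server and node versions:

```shell
# Print the client and server (control plane) versions of the current cluster.
kubectl version

# The VERSION column shows each node's kubelet version; it should be v1.27 or higher.
kubectl get nodes
```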