
Unable to get configmap/extension-apiserver-authentication in kube-system #688

Open
rr-ngoc-to opened this issue Jan 22, 2025 · 1 comment
Labels
needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one.

Comments

@rr-ngoc-to
rr-ngoc-to commented Jan 22, 2025

Hi everyone,
I am trying to deploy the Prometheus adapter, but I get the errors below.
This is our setup:
Prometheus is deployed in one K8s cluster; accessing it requires bearer tokens.
Prometheus Adapter is deployed in a different K8s cluster.
Is there any solution to fix this issue?

Error:

W0122 00:40:41.716634       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
F0122 00:40:41.716668       1 adapter.go:377] unable to fetch server: unable to load configmap based request-header-client-ca-file: configmaps "extension-apiserver-authentication" is forbidden: User "system:serviceaccount:rpay-dev-tools:tenant-pod-default" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
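For reference, the fix suggested in the warning itself translates into a RoleBinding like the following. This is only a sketch based on the service account named in the error (`tenant-pod-default` in namespace `rpay-dev-tools`); the binding name is arbitrary, and it must be applied in the cluster where the adapter runs, not the cluster hosting Prometheus:

```yaml
# Grants the adapter's service account read access to the
# extension-apiserver-authentication configmap in kube-system,
# via the built-in extension-apiserver-authentication-reader Role.
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: prometheus-adapter-auth-reader   # arbitrary name
  namespace: kube-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: extension-apiserver-authentication-reader
subjects:
  - kind: ServiceAccount
    name: tenant-pod-default
    namespace: rpay-dev-tools
```

Equivalently, the one-liner from the warning would be `kubectl create rolebinding prometheus-adapter-auth-reader -n kube-system --role=extension-apiserver-authentication-reader --serviceaccount=rpay-dev-tools:tenant-pod-default`.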

Deployment file:

apiVersion: v1
kind: Service
metadata:
  name: ${APP_PROMETHEUS_ADAPTER}
  namespace: ${NAMESPACE}
spec:
  selector:
    app: ${APP_PROMETHEUS_ADAPTER}
  ports:
    - name: https
      protocol: TCP
      port: 443
      targetPort: 6443
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ${APP_PROMETHEUS_ADAPTER}
  namespace: ${NAMESPACE}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: ${APP_PROMETHEUS_ADAPTER}
  template:
    metadata:
      labels:
        app: ${APP_PROMETHEUS_ADAPTER}
      annotations:
        sidecar.istio.io/inject: "false"
    spec:
      automountServiceAccountToken: true
      containers:
        - name: ${APP_PROMETHEUS_ADAPTER}
          image: ${APP_IMAGE}
          imagePullPolicy: IfNotPresent
          resources:
            limits:
              cpu: 1
              memory: 2Gi
            requests:
              cpu: 500m
              memory: 1Gi
          args:
            - /adapter
            - --prometheus-url=http://prom-ui-rpay-dev-jpw1-pslscanner-3000-mon-aas-prod.jpw1-caas1-dev1.caas.jpw1a.r-local.net/
            - --prometheus-header=Authorization=Bearer eyJhbGciOiJSUzI1NiIsInR5cCIgOiAiSldUIiwia2lkIiA6....
            - --config=/etc/adapter/config.yaml
            - --cert-dir=/var/run/serving-cert
            - --client-qps=50
            - --client-burst=100
            - --secure-port=6443
            - --metrics-relist-interval=1m
            - --v=6
            - --tls-cipher-suites=TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_ECDSA_WITH_CHACHA20_POLY1305,TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA,TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA,TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA,TLS_RSA_WITH_AES_128_GCM_SHA256,TLS_RSA_WITH_AES_256_GCM_SHA384,TLS_RSA_WITH_AES_128_CBC_SHA,TLS_RSA_WITH_AES_256_CBC_SHA
          volumeMounts:
            - name: config
              mountPath: /etc/adapter
            - name: tmpfs
              mountPath: /tmp
              readOnly: false
            - name: volume-serving-cert
              mountPath: /var/run/serving-cert
      volumes:
        - name: config
          configMap:
            name: custom-metrics-config
        - emptyDir: {}
          name: volume-serving-cert
        - name: tmpfs
          emptyDir: {}
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      serviceAccount: tenant-pod-default
      serviceAccountName: tenant-pod-default
      terminationGracePeriodSeconds: 5
@k8s-ci-robot k8s-ci-robot added the needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. label Jan 22, 2025
@k8s-ci-robot
Contributor

This issue is currently awaiting triage.

If prometheus-adapter contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label and provide further guidance.

The triage/accepted label can be added by org members by writing /triage accepted in a comment.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
