Describe the bug
Telepresence runs into a crash loop when a Pod on the cluster specifies a telepresence.getambassador.io/inject-container-ports annotation that references a nonexistent port.
Context: I'm trying to get Telepresence working with knative-serving (see also #1029) by injecting the agent via the Knative Service object. This avoids reconcile conflicts with telepresence intercept, which fails to adjust the Deployment because Knative Serving just overwrites it again.
However, the agent installed by injection alone ( telepresence.getambassador.io/inject-traffic-agent: enabled ) targets the wrong port, so I tried specifying the name of the port on the Deployment. This failed catastrophically: the traffic-manager is now crash looping and refuses to return to working order.
Interestingly, none of the Pods on the cluster contain an annotation referencing the port mentioned in the log below anymore, nor do any of the ConfigMaps. I'm not sure where Telepresence is getting this state from; it might be retries on the mutating webhook, but I'm not sure.
--> I had to remove old Knative Revision objects to prune the existing Deployments/Services that still contained the incorrect annotation. Interestingly, Telepresence apparently even considers Deployments that are scaled to 0.
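For reference, the annotations were set on the pod template (via the Knative Service's spec.template.metadata.annotations in my case) roughly like this. This is a minimal sketch; "user-port" is a placeholder, not my actual port name:

```yaml
# Pod-template annotations read by the Telepresence agent injector:
telepresence.getambassador.io/inject-traffic-agent: enabled
# Naming a port that does not exist on any container in the pod
# is what triggers the traffic-manager crash loop:
telepresence.getambassador.io/inject-container-ports: user-port
```

I checked for remaining references with something along the lines of `kubectl get pods -A -o yaml | grep inject-container-ports` (and the same for ConfigMaps), which turned up nothing.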
Please use telepresence loglevel debug to ensure we have the most helpful logs,
reproduce the error, and then run telepresence gather-logs to create a
zip file of all logs for Telepresence's components (root and user daemons,
traffic-manager, and traffic-agents) and attach it to this issue. See an
example command below:
telepresence loglevel debug
* reproduce the error *
telepresence gather-logs --output-file /tmp/telepresence_logs.zip
# To see all options, run the following command
telepresence gather-logs --help
To Reproduce
Steps to reproduce the behavior:
See above.
Expected behavior
The traffic-manager should reject or ignore an annotation that references a nonexistent port (e.g. by logging a warning), instead of crash looping.
Versions (please complete the following information):
Output of telepresence version (preferably while telepresence is connected)
telepresence version
OSS Client : v2.21.3
OSS Root Daemon: v2.21.3
OSS User Daemon: v2.21.3
Traffic Manager: not connected
(not connected because the manager is crash looping, but its log reports: 2025-03-04 13:11:28.9447 info OSS Traffic Manager v2.21.3 [uid:1000,gid:0])
Operating system of workstation running telepresence commands
macOS 15.3.1; Telepresence installed via Homebrew (telepresence-oss)
Kubernetes environment and Version [e.g. Minikube, bare metal, Google Kubernetes Engine]
K8s Server Version: v1.29.11 on AKS
Additional context
telepresence_logs_2025-03-04T14:14:17+01:00.zip