I haven't been able to reproduce the issue in a 4.11.17 cluster, although I have a couple of thoughts.
I recommend increasing concurrency/procs rather than raising the number of parallel connections beyond roughly 250, as I have seen that wrk doesn't perform very well when the number of connections is very high.
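For illustration, a scenario along these lines keeps connections per wrk process moderate and scales the client-side load through concurrency/procs instead (field names follow the sample config shipped with ingress-perf as I remember them; please double-check them against your version):

```yaml
# Illustrative only: keep connections per wrk process under ~250 and scale
# load with concurrency (client pod replicas) and procs (wrk processes per pod).
- termination: reencrypt
  connections: 200      # per wrk process
  samples: 2
  duration: 1m
  concurrency: 18       # client pod replicas
  procs: 2              # wrk processes per pod
  tool: wrk
```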
What ingress-perf prints is the aggregated value of some of the metrics observed across all client pod replicas; for that reason, printing the output of every pod on the screen could be very noisy... but on second thought, I think we could add this output at a "trace" log level.
You should find the raw metrics (including a per-pod breakdown) in the corresponding document in the Elasticsearch instance.
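For example, a trace-level per-pod dump would keep the default output unchanged. This is only a minimal standalone sketch with made-up type and field names, not the actual code around exec.go:

```go
package main

import (
	log "github.com/sirupsen/logrus"
)

// podResult uses illustrative field names; the real result types in
// pkg/runner/exec.go are likely different.
type podResult struct {
	Pod        string
	Rps        float64
	AvgLatency float64 // microseconds
	P99Latency float64 // microseconds
}

func main() {
	log.SetLevel(log.TraceLevel)
	results := []podResult{
		{Pod: "client-0", Rps: 41000, AvgLatency: 0, P99Latency: 0},
		{Pod: "client-1", Rps: 39500, AvgLatency: 850, P99Latency: 2100},
	}
	var totalRps, sumAvg, maxP99 float64
	for _, r := range results {
		// Per-pod breakdown only shows up when the log level is trace,
		// so the default output stays as quiet as it is today.
		log.Tracef("pod %s: Rps=%.0f avgLatency=%.0fus P99Latency=%.0fus",
			r.Pod, r.Rps, r.AvgLatency, r.P99Latency)
		totalRps += r.Rps
		sumAvg += r.AvgLatency
		if r.P99Latency > maxP99 {
			maxP99 = r.P99Latency
		}
	}
	// Aggregated summary, similar in spirit to what ingress-perf prints now
	// (the real aggregation logic, e.g. for P99, may differ).
	log.Infof("Rps=%.0f avgLatency=%.0fus P99Latency=%.0fus",
		totalRps, sumAvg/float64(len(results)), maxP99)
}
```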
ingress-perf version or commit ID: HAProxy version 2.6.13-234aa6d 2023/05/02
Describe the bug
When running an ingress-perf test with 800 connections on 9 worker nodes, the first reencrypt sample reported a latency of 0. I reproduced this twice.
To Reproduce
Steps to reproduce the behavior:
Expected behavior
Currently we only output Rps, avgLatency and P99Latency:
ingress-perf/pkg/runner/exec.go, line 91 (commit d80287c)
Screenshots or output
First reproduction:
Partial output of the second reproduction: