Wide range of throughput numbers using my large test case #240
There can be many reasons why throughput fluctuates. If you give me the name and region of your function app, and a start/end time for the experiment(s), I can take a look at our internal telemetry to see if I can determine a cause. If I were to just guess, my guess would be that it has to do with the load balancing of activities between nodes. This may or may not be the actual reason. One thing you can try is to turn off the load balancing altogether, using the setting …
Thanks, I'll try the "Local" setting later and report any differences.
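For reference, here is a minimal host.json sketch of where such a Netherite option would go. The maintainer's comment above is truncated, so the exact setting key does not appear in this thread; the `ActivityScheduler` key below is an assumption inferred from the "Local" value mentioned in the reply, and should be checked against the Netherite documentation.

```json
{
  "version": "2.0",
  "extensions": {
    "durableTask": {
      "hubName": "MyTaskHub",
      "storageProvider": {
        // Key name is an assumption; only the "Local" value is confirmed in this thread.
        "type": "Netherite",
        "ActivityScheduler": "Local"
      }
    }
  }
}
```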
Sorry, I decided I should keep this one open until after I've had a chance to test with the 1.4.0 version.
The new, infamous test case I'm using for Netherite, which now launches 200 orchestrators with 200 activities each, seems to report widely varying throughput numbers, even after scale-up.
I'm wondering if something is working differently than expected.
Over the course of my looping sample, starting from idle, I'd expect to see slower throughput at first, then faster and faster, and then some sort of stable or close-to-stable number of items processed per second.
What I'm seeing varies quite widely, from 14,000/second sometimes down to 2,000/second and less.
See below. This is a test of 40,000 items (200 × 200) in a loop, so the invocations come right after each other.
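To make the test shape concrete, here is a minimal sketch (in TypeScript, using the durable-functions JavaScript SDK) of a 200 × 200 fan-out like the one described. The function names and the choice of SDK are illustrative assumptions, not the author's actual code.

```typescript
import * as df from "durable-functions";

// Hypothetical outer orchestrator: fans out 200 sub-orchestrators and waits for all of them.
const outerOrchestrator = df.orchestrator(function* (context) {
  const subTasks = [];
  for (let i = 0; i < 200; i++) {
    subTasks.push(context.df.callSubOrchestrator("FanOutSubOrchestrator", i));
  }
  yield context.df.Task.all(subTasks);
});

// Hypothetical sub-orchestrator: fans out 200 activities, so one full pass
// schedules 200 x 200 = 40,000 activity invocations.
const subOrchestrator = df.orchestrator(function* (context) {
  const activityTasks = [];
  for (let i = 0; i < 200; i++) {
    activityTasks.push(context.df.callActivity("NoOpActivity", i));
  }
  yield context.df.Task.all(activityTasks);
});

export { outerOrchestrator, subOrchestrator };
```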
My next version will also log the wall time; sorry about that.
I've included a few other numbers that may help the analysis.
Min and max orchestrator dequeue delay are the times between the orchestrator being started from StartSubOrchestrator and the moment the orchestrator code starts to run.
The same applies to min and max activity dequeue delay.
Out dequeue delay is the time from the return statement in the orchestrator or activity to the moment the parent code regains control.
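For illustration, one way to capture such a dequeue delay is to stamp the scheduling time into the activity input and compare it against the clock when the activity starts. This is only a sketch of that idea, not the author's actual instrumentation; the function name, input field, and binding name are made up.

```typescript
import * as df from "durable-functions";
import { AzureFunction, Context } from "@azure/functions";

// In the orchestrator: record the (deterministic) scheduling time and pass it to the activity.
const orchestrator = df.orchestrator(function* (context) {
  const scheduledAt = context.df.currentUtcDateTime.toISOString();
  const delayMs = yield context.df.callActivity("MeasuredActivity", { scheduledAt });
  return delayMs;
});

// In the activity: the dequeue delay is the wall-clock gap between scheduling and start.
const measuredActivity: AzureFunction = async function (context: Context): Promise<number> {
  // Input arrives via the activityTrigger binding (binding name "input" is an assumption).
  const input = context.bindings.input as { scheduledAt: string };
  return Date.now() - Date.parse(input.scheduledAt);
};

export { orchestrator, measuredActivity };
```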
Is it expected that the times should vary so widely? I'm using a copy of the benchmark host.json in my tests.