Rate exceeded #6
Hi, if you want a more robust solution, check my fork. My application produces a huge amount of logs, and I made several changes to cope with error conditions. This commit introduces two methods for dealing with ThrottlingException. There are pros and cons; see the commit message. Note that my fork should be considered experimental code, tested only with ExAws (as opposed to the AWS Elixir library used for the original design). Good luck!
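The commit itself is the authoritative reference for the two approaches; as a rough illustration only, one of them (retry with backoff, then drop) might look like the sketch below. Every name here (`flush_with_retry`, `send_log_events`, the state shape) is an assumption for illustration, not the fork's actual code; only the `{"ThrottlingException", _}` error tuple is taken from the crash log in this thread.

```elixir
# Illustrative sketch only: retry a throttled flush a few times with a
# crude backoff, then drop the batch rather than block the Logger forever.
# send_log_events/2 is a hypothetical placeholder for the AWS call.
defp flush_with_retry(state, buffer, attempts \\ 3) do
  case send_log_events(state, buffer) do
    {:ok, _resp} ->
      {:ok, %{state | buffer: []}}

    {:error, {"ThrottlingException", _msg}} when attempts > 0 ->
      # Back off a little longer on each successive attempt.
      Process.sleep(200 * (4 - attempts))
      flush_with_retry(state, buffer, attempts - 1)

    {:error, {"ThrottlingException", _msg}} ->
      # Give up: discard the batch to keep the application responsive.
      {:ok, %{state | buffer: []}}
  end
end
```

The trade-off between the two methods is exactly the one the commit message describes: retrying preserves logs at the cost of latency (and possibly blocking the Logger), while dropping keeps the application responsive at the cost of losing messages.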
Hello, thank you so much. I'll test it and let you know.
Hi @jhonathas, my apologies for the late response, but I have been on holiday for the past few weeks! In addition to what @pmenhart already said, any of these errors should obviously not crash your application. Since the library is running as a Logger backend in its own process, errors in it should not take down your application. Thanks!
I have the same issue. I changed the buffer size, but now the Logger process is stuck.
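For context, this kind of tuning usually happens in config. The sketch below shows the standard Elixir `Logger` knobs; the `max_buffer_size` key for this backend is an assumption (check the library's README for the actual option name and backend key):

```elixir
# config/config.exs — a sketch, not verified against this library.
# :sync_threshold is a standard Logger option (default 20): above it,
# Logger switches from async to sync mode and callers start blocking.
config :logger,
  sync_threshold: 100

# Assumed option name for this backend's flush buffer; a larger buffer
# means fewer, bigger PutLogEvents calls and less throttling.
config :logger, CloudWatch,
  max_buffer_size: 50
```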
So this looks to happen because of an unmatched error in the do_flush function. Adding a clause that matches the throttling error technically solves the problem, but it creates a new issue. By default, Logger switches from async mode to synchronous mode when more than 20 messages queue up. With those defaults, the change eventually gets all of the messages through, but requests/responses are delayed until the logs can be sent. This is obviously an improvement over a hard lock, but still not ideal. Increasing the max buffer size and the sync_threshold keeps everything running more smoothly, but comes with increased memory usage. A longer-term solution would be to handle the error differently. Perhaps the backend could temporarily increase the buffer size when it receives a rate-limit message, and start discarding messages once it has hit the max buffer limit and is still being throttled.
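The original comment included a code snippet that was lost in this copy of the thread. As a hedged reconstruction of the general idea, a clause matching the throttling error in the flush path might look like the sketch below; the function and state names are assumptions, and only the error-tuple shape comes from the crash log in this thread.

```elixir
# Hypothetical sketch: match the throttling error so the gen_event
# handler does not crash with a case_clause error.
# send_log_events/2 stands in for the actual AWS request.
defp do_flush(state, buffer) do
  case send_log_events(state, buffer) do
    {:ok, _resp} ->
      {:ok, %{state | buffer: []}}

    # Keep the buffer and retry on the next flush instead of crashing.
    # This is what causes the backpressure described above: messages
    # accumulate until Logger crosses its sync_threshold.
    {:error, {"ThrottlingException", "Rate exceeded"}} ->
      {:ok, state}
  end
end
```

Because the buffer is retained rather than dropped, this is safe but slow under sustained throttling, which is exactly why the comment suggests combining it with a larger buffer or an eventual-discard policy.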
Hi @dimun and @jschniper, my apologies for the late reply. Unfortunately, I no longer have the time to actively work on this library, nor do I still have it in active use myself. If any of you are able to submit a pull request, however, I will happily review it, test it, and publish a new version to hex.pm. Otherwise, I'm afraid I cannot be of more help at this point.
Hello, I ran into this problem and I do not know how to solve it.
It looks like I've exceeded the rate limit, but I have not configured any limit myself.
```
Process #PID<0.20400.23> terminating: {:EXIT, {{:case_clause, {:error, {"ThrottlingException", "Rate exceeded"}}},
  [{CloudWatch, :flush, 2, [file: 'lib/cloud_watch.ex', line: 93]},
   {:gen_event, :server_update, 4, [file: 'gen_event.erl', line: 577]},
   {:gen_event, :server_notify, 4, [file: 'gen_event.erl', line: 559]},
   {:gen_event, :handle_msg, 6, [file: 'gen_event.erl', line: 300]},
   {:proc_lib, :init_p_do_apply, 3, [file: 'proc_lib.erl', line: 249]}]}}
```
Apparently, my application went offline because of this error. Can this happen, or does the dependency work in isolation so that an error returned from Amazon would not take down the application?