
Refreshed token doesn't contain capabilities for all channels #55

Open
sacOO7 opened this issue Dec 5, 2024 · 29 comments · May be fixed by #56
Assignees: sacOO7
Labels: bug (Something isn't working. It's clear that this does need to be fixed.)

Comments

sacOO7 (Collaborator) commented Dec 5, 2024

• Customer has reported an issue where a refreshed token doesn't contain capabilities for all channels, causing existing channels to reconnect again.

The current issue is that when the Ably client hits the token expiry time, it makes multiple token refresh calls. Although the current token has all the necessary capabilities, the server sometimes responds with a token containing only the channel_name provided in the payload. This causes the token to be marked as invalid, which then triggers another refresh cycle to obtain a token with the proper capabilities.
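For illustration, here is a rough sketch of the difference between the token the client expects and the narrower one sometimes returned on refresh. It uses the ably-php REST client directly; the channel names, operations, and wiring are assumptions for this example, not the broadcaster's actual code.

```php
<?php

use Ably\AblyRest;

// Illustrative sketch only; the broadcaster's real token endpoint may differ.
$ably = new AblyRest(env('ABLY_KEY'));

// Expected: a token request whose capability covers every channel the client is attached to.
$fullTokenRequest = $ably->auth->createTokenRequest([
    'capability' => json_encode([
        'private:orders'   => ['subscribe', 'presence'],
        'private:payments' => ['subscribe', 'presence'],
    ]),
]);

// Sometimes returned on refresh: capability limited to the single channel_name from the
// auth payload. The client then marks the token as invalid and starts another refresh cycle.
$narrowTokenRequest = $ably->auth->createTokenRequest([
    'capability' => json_encode([
        'private:orders' => ['subscribe', 'presence'],
    ]),
]);
```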


sacOO7 self-assigned this Dec 5, 2024
sacOO7 added the bug label Dec 5, 2024
sacOO7 (Collaborator, Author) commented Dec 5, 2024

A temporary workaround for now is to set a maximum ABLY_TOKEN_EXPIRY in the .env file:

ABLY_TOKEN_EXPIRY=86400 # 1 day (24 * 60 * 60) seconds
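For context, here is a minimal sketch of how such a value would typically feed the server-side token request; the config key and wiring shown are assumptions, so check the package's config and README for the exact names. Note that the Ably REST API expresses token TTL in milliseconds, while ABLY_TOKEN_EXPIRY here is in seconds.

```php
// config/broadcasting.php (sketch; the exact key name is an assumption)
'ably' => [
    'driver'       => 'ably',
    'key'          => env('ABLY_KEY'),
    // Token lifetime in seconds, taken from ABLY_TOKEN_EXPIRY.
    'token_expiry' => env('ABLY_TOKEN_EXPIRY', 3600),
],

// Wherever the token request is built, the seconds value would be converted to the
// millisecond ttl that Ably token params expect, e.g.:
// 'ttl' => config('broadcasting.connections.ably.token_expiry') * 1000
```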

sacOO7 (Collaborator, Author) commented Dec 18, 2024

sacOO7 linked a pull request Dec 18, 2024 that will close this issue
graphem commented Jan 8, 2025

Yeah, it seems I am experiencing this, and it is killing my server, as all the clients seem to make tons of requests again.

graphem commented Jan 8, 2025

@sacOO7 Any news on merging this and doing a release?

sacOO7 (Collaborator, Author) commented Jan 9, 2025

Currently, this is on hold and will be picked up soon. Can you try increasing TOKEN_EXPIRY for the time being to, say, 6 hrs? wdyt
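For example, six hours in the .env file would be:

ABLY_TOKEN_EXPIRY=21600 # 6 hours (6 * 60 * 60) seconds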

graphem commented Jan 9, 2025

Ok, thanks! It seems the token expiry is not making a difference, so maybe my issue is something different.

sacOO7 (Collaborator, Author) commented Jan 9, 2025

@graphem okay, make sure to report whatever issue you are facing 👍
Thanks!

graphem commented Jan 9, 2025

@sacOO7 Essentially I am using Laravel Echo with Ably and the listen method on channels. Lately I have observed a spike in traffic on the website, and after investigation it comes from tons of calls to /broadcasting/auth, which is the endpoint to get the Ably token. It is getting out of control: I am seeing hundreds of calls per second from a single client, as if they are stuck in a loop. So I was curious whether this issue is related to token expiration, with clients stuck in a loop getting the token over and over. This is putting a lot of pressure on our system right now; it looks almost like a DDoS, but with just normal user traffic, since all our routes are auth protected and only logged-in users can access the web app.

sacOO7 (Collaborator, Author) commented Jan 9, 2025

@graphem can you log the userId or IP address of the users requesting tokens? (A rough sketch of one way to do this follows.)
Also, if you don't mind, how many users use your app at a time, and how many private/presence channels do they use on average?
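For reference, here is a sketch of how such logging could be added on the Laravel side; the middleware name and registration are assumptions for illustration, not part of this package.

```php
<?php

namespace App\Http\Middleware;

use Closure;
use Illuminate\Http\Request;
use Illuminate\Support\Facades\Log;

// Hypothetical middleware to record who is hitting /broadcasting/auth and how often.
class LogBroadcastAuth
{
    public function handle(Request $request, Closure $next)
    {
        Log::info('broadcasting/auth request', [
            'user_id'      => optional($request->user())->id,
            'ip'           => $request->ip(),
            'channel_name' => $request->input('channel_name'),
            'user_agent'   => $request->userAgent(),
        ]);

        return $next($request);
    }
}
```

It could then be attached where the broadcasting auth routes are registered, e.g. Broadcast::routes(['middleware' => ['web', 'auth', \App\Http\Middleware\LogBroadcastAuth::class]]);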

graphem commented Jan 9, 2025

I am not using presence channels. I am using very few channels: the app might have up to 30 different private channels and 2-3 public channels, and we are running about 5 apps per server. I am going to try to log the entries to get more details, but from simply observing the access logs the traffic was just insane. We have around 2000 concurrent users per server right now. We use decent bare-metal servers which can take the traffic, but this is not sustainable as we grow.

sacOO7 (Collaborator, Author) commented Jan 9, 2025

Okay, it would be great if you could analyse the logs and check which requests users are sending consistently.
I am confident you must have battle-tested the app locally. It would also be great to check whether the requests are coming from mobile browsers that might be causing such issues. You could also use available AI tools to analyse the logs.

graphem commented Jan 9, 2025

Yes, it is a tough one because we are not able to reproduce it locally or in our staging environment. I am going to analyse the logs further today and see if I can find more info.

sacOO7 (Collaborator, Author) commented Jan 9, 2025

@graphem thanks. We will be eagerly waiting for your analysis 👍

graphem commented Jan 9, 2025

Is there a way to send private files, as I don't want to share sensitive info here?

sacOO7 (Collaborator, Author) commented Jan 9, 2025

Sure, you can open a ticket at https://ably.com/support and share the information there 👍

graphem commented Jan 9, 2025

Ok, cool. I already have a ticket; I will gather more info.

sacOO7 (Collaborator, Author) commented Jan 9, 2025

Cool 👍

graphem commented Jan 9, 2025

Actually, the token expiry does seem to make a difference today, so it might be that issue.

sacOO7 (Collaborator, Author) commented Jan 9, 2025

Okay ... I genuinely think it should.
Just make sure you set ABLY_TOKEN_EXPIRY correctly in the .env file as per the README.

graphem commented Jan 9, 2025

Ok, I changed the setting on all my apps and that seems to do the trick; I will have confirmation tomorrow when the app is busier.

sacOO7 (Collaborator, Author) commented Jan 10, 2025

Great 👍
You can also try increasing the timeout value to, say, 12 or 24 hrs. Let us know about your findings.

graphem commented Jan 10, 2025

@sacOO7 What is the setting for the timeout?

sacOO7 (Collaborator, Author) commented Jan 10, 2025

I didn't get your question. Do you mean the token expiry or something else?

graphem commented Jan 10, 2025

> Great 👍 you can also try increasing the timeout value to say 12 or 24 hrs. Let us know about your findings.

You mentioned a timeout value here.

sacOO7 (Collaborator, Author) commented Jan 10, 2025

Ohh, I meant this -> #55 (comment)
I meant the client token will expire/time out after the given expiry : )

sacOO7 (Collaborator, Author) commented Jan 10, 2025

Btw, were you able to achieve the desired behaviour?

graphem commented Jan 10, 2025

Oh yeah, sorry, I increased it; our server load is 200% better this morning. This seems to have done the trick.

sacOO7 (Collaborator, Author) commented Jan 10, 2025

Good to hear that 👍
Feel free to raise an issue if needed : )

graphem commented Jan 10, 2025

Ok, thanks! Yeah, it is much better; looking forward to the merge here. Thanks!
