/cache is not successful at warming the cache #346

Open

kingdonb opened this issue Dec 10, 2022 · 1 comment

@kingdonb
Collaborator

The /cache endpoint that Heroku was hitting every hour is broken; it doesn't work anymore.

We also don't have a facility like Heroku Scheduler to hit the endpoint hourly and ensure the cache stays warm.

So when you first hit the site, the cache will probably be cold and the response will take a few seconds longer, or require a second request – but we could resolve this pretty easily with a Kubernetes CronJob, I think.
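
For reference, the CronJob version would look something like the manifest below – a sketch only, with placeholder name, image, and URL, since nothing has been committed:

```yaml
# Sketch of an hourly cache-warming CronJob. The name, image, and URL
# are placeholders, not values from the real cluster.
apiVersion: batch/v1
kind: CronJob
metadata:
  name: warm-cache
spec:
  schedule: "0 * * * *"   # top of every hour, like the old Heroku job
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
            - name: curl
              image: curlimages/curl:8.5.0
              args: ["--fail", "--silent", "--show-error", "https://example.com/cache"]
```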

It would be worthwhile to collect some metrics about cold and warm startup – the server is not serverless, but as I understand it the new bit.io database host is, which is why it's so cheap. (I was not about to pay Heroku $5/mo for databases when we're doing fewer than hundreds of thousands of queries per month. My colleagues from Computer Science House created this service, and it's essentially free for up to a billion queries per month. So far it's working great!)

@kingdonb
Collaborator Author

kingdonb commented Dec 11, 2022

From:

It's great! And it's an example of Ruby fibers for concurrency. (My process needed to wait around 4 hours between pulling the current set of promises and checking whether they've been finished yet. I started with a script that exits when it's done, sleeps for 4 hours, and then executes itself again after the wait... but that was clunky and difficult to monitor and test, so I decided to implement fibers instead: an "hourly heartbeat" to show the fiber scheduler is still working, plus a 4-hourly scrape.)
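
For illustration, a minimal sketch of that heartbeat-plus-scrape shape, assuming Ruby 3's fiber scheduler via the async gem (an assumption – the comment doesn't say which scheduler the process uses, and scrape_promises is a hypothetical stand-in for the real scrape):

```ruby
# Minimal sketch of the hourly-heartbeat + 4-hourly-scrape pattern.
# Assumes the async gem (Ruby 3 fiber scheduler); scrape_promises is
# a hypothetical placeholder for the real commits.to scrape.
require 'async'

def scrape_promises
  puts "[scrape]    #{Time.now} pulling the current set of promises"
end

Async do |task|
  # Heartbeat fiber: proves the fiber scheduler is still alive each hour.
  task.async do
    loop do
      puts "[heartbeat] #{Time.now} still running"
      sleep 3600          # non-blocking: yields to the fiber scheduler
    end
  end

  # Scrape fiber: does the real work every 4 hours.
  task.async do
    loop do
      scrape_promises
      sleep 4 * 3600
    end
  end
end
```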

So, to the point... I have this other process, which is related to commits.to and has an hourly job.

I think rather than using CronJob resources, which would create Kubernetes Jobs under the hood, we can keep the design simpler by leveraging this existing process, which already has an hourly heartbeat. (I don't currently manage any Kubernetes CronJob resources, so it's simpler not to add one.) Just making a note of this connection here before I forget it.
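
Concretely, the heartbeat fiber could fire a request at /cache each hour. A hypothetical sketch – the host URL and helper name are placeholders, not part of the real code:

```ruby
# Hypothetical: warm the cache from the existing hourly heartbeat
# rather than adding a CronJob. The URL is a placeholder.
require 'net/http'

def warm_cache(url = 'https://example.com/cache')
  res = Net::HTTP.get_response(URI(url))
  puts "[cache] #{Time.now} warmed, HTTP #{res.code}"
rescue StandardError => e
  # A failed warm-up shouldn't kill the heartbeat fiber.
  puts "[cache] #{Time.now} warm-up failed: #{e.message}"
end

# Called from the heartbeat loop, e.g. right after the hourly log line:
#   warm_cache
```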

The other half of this problem is just firing up a debugger and ensuring that we can actually hit the /cache endpoint without crashing the request. I had a stack dump, but I don't have it handy at the moment.
