
feat(engine): mutex free subscription handling #1076

Merged · 24 commits merged into master from dustin/lock-free-subscriptions on Feb 20, 2025

Conversation

@StarpTech (Collaborator) commented on Feb 17, 2025

This PR removes all mutexes from subscription handling and uses channels to synchronize access instead. Every subscription spawns a single goroutine in which the fetch-and-resolve process runs (including the connection write). Tasks are passed to a subscription over a buffered channel and processed on its event loop (single-threaded). That way, we can guarantee that messages are sent in the order they were received, no matter how high the concurrency is; before, events could arrive out of order. The buffered channel also increases throughput: many updates can be offloaded to a subscription worker while other subscriptions continue independently. The implementation makes use of the natural back-pressure behaviour of Go channels. A MaxSubscriptionFetchTimeout was added to ensure that a single subscription fetch can't block the event loop forever; it is configurable and set to 30s by default.
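A minimal sketch of the pattern, assuming hypothetical types and names (`event`, `subscription`, `enqueue`, and `resolveAndWrite` are illustrative, not the engine's actual API):

```go
package engine

import (
	"context"
	"time"
)

// Hypothetical types illustrating the pattern; the engine's real types differ.
type event []byte

type subscription struct {
	events chan event // buffered: triggers enqueue updates without blocking each other
}

const maxSubscriptionFetchTimeout = 30 * time.Second // configurable, 30s by default

// worker is the single goroutine owning a subscription. All state is confined
// to this goroutine, so no mutex is needed, and events are written to the
// connection in the exact order they were received.
func (s *subscription) worker(ctx context.Context, resolveAndWrite func(context.Context, event) error) {
	for {
		select {
		case <-ctx.Done():
			return
		case ev := <-s.events:
			// Bound a single fetch/resolve so one slow event can't block
			// this subscription's event loop forever.
			fetchCtx, cancel := context.WithTimeout(ctx, maxSubscriptionFetchTimeout)
			err := resolveAndWrite(fetchCtx, ev)
			cancel()
			if err != nil {
				return
			}
		}
	}
}

// enqueue offloads a trigger update to the subscription worker. When the
// buffer is full, the send blocks: the natural back-pressure of Go channels.
func (s *subscription) enqueue(ev event) {
	s.events <- ev
}
```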

The semaphores have been removed as well, because throughput is now naturally limited by writing to subscriptions sequentially via channels. Additionally, heartbeat handling is done in each subscription's worker goroutine, which simplified the implementation and removed another synchronization point.
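Continuing the sketch above, heartbeats could fold into the same select loop; the interval, payload, and names are assumptions:

```go
// workerWithHeartbeat shows how heartbeats fit into the same event loop: the
// worker owns the connection write, so a heartbeat can never race with an
// event write and no extra goroutine or lock is required.
func (s *subscription) workerWithHeartbeat(ctx context.Context, write func(event) error, interval time.Duration) {
	ticker := time.NewTicker(interval)
	defer ticker.Stop()
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if err := write(event("{}")); err != nil { // hypothetical heartbeat payload
				return
			}
		case ev := <-s.events:
			if err := write(ev); err != nil {
				return
			}
			ticker.Reset(interval) // heartbeats only fire while the subscription is idle
		}
	}
}
```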

We also significantly improved how synchronous subscriptions are managed. They are no longer handled differently, except that for SSE/Multipart we block until the client request is done or the subscription is completed and drained.
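That wait could be as simple as blocking on two signals; this is a sketch, and modelling "completed and drained" as a channel is an assumption:

```go
// waitForEnd blocks a synchronous transport handler (SSE/Multipart) until
// either the client request ends or the subscription is completed and drained.
func waitForEnd(clientCtx context.Context, completedAndDrained <-chan struct{}) {
	select {
	case <-clientCtx.Done(): // client request is done
	case <-completedAndDrained: // subscription completed and all events drained
	}
}
```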

I ran a few benchmarks with 200 concurrent subscriptions and around 10 triggers. Writing to all subscriptions was still fast (<1ms, ~3ms max). The next iteration of the implementation should focus on leveraging concurrency better: triggers should be handled individually so that the blast radius of a fast producer paired with a slow writer can't impact others. Today, a fast producer results in increased latency but more stable CPU utilization due to back-pressure.

Fixes ENG-6513

@StarpTech StarpTech requested a review from jensneuse February 17, 2025 23:20
@StarpTech StarpTech marked this pull request as ready for review February 18, 2025 19:25
@jensneuse (Member) commented:

I answered my own change requests with a draft for improvements, except one where I'm not sure:
#1080

jensneuse and others added 7 commits February 19, 2025 12:35
When an origin completes a subscription, we drain inflight events before closing the connection to the client. In addition, we only send the complete event once all events have been handled.
@alepane21 (Contributor) left a comment


Really nice improvement! I only saw one field left over from the refactoring that should be removed (lastWrite).

@StarpTech StarpTech merged commit 21be4ab into master Feb 20, 2025
10 checks passed
@StarpTech StarpTech deleted the dustin/lock-free-subscriptions branch February 20, 2025 21:43
StarpTech added a commit that referenced this pull request Feb 20, 2025
🤖 I have created a release *beep* *boop*
---


## [2.0.0-rc.157](v2.0.0-rc.156...v2.0.0-rc.157) (2025-02-20)


### Features

* **engine:** mutex free subscription handling ([#1076](#1076)) ([21be4ab](21be4ab))


### Bug Fixes

* fix values validation list compatibility check ([#1082](#1082)) ([541be0d](541be0d))

---
This PR was generated with [Release
Please](https://github.com/googleapis/release-please). See
[documentation](https://github.com/googleapis/release-please#release-please).