Add deshred service #40

Open

sbs2001 wants to merge 4 commits into master
Conversation

sbs2001 commented Feb 15, 2025

This PR depends on jito-labs/mev-protos#46.

Added a gRPC server where clients can subscribe and listen to entries deshredded from the shreds received by the shredstream proxy.

Example usage

  1. Enable the deshred server by providing the --deshred-listen-address "127.0.0.1:50051" parameter:

RUST_LOG=info cargo run --bin jito-shredstream-proxy -- shredstream \
    --block-engine-url "https://mainnet.block-engine.jito.wtf" \
    --auth-keypair "key.json" \
    --desired-regions "amsterdam,ny" \
    --dest-ip-ports "127.0.0.1:8001,10.0.0.1:8001" \
    --deshred-listen-address "127.0.0.1:50051"

  2. Use any gRPC client to subscribe to deshredded entries. A very simple client is included in examples; a minimal sketch of such a client follows the sample output below.

cd examples/deshred/ && cargo run

Received entry: Entry { num_hashes: 1665, hash: [45, 116, 87, 36, 101, 43, 53, 65, 38, 194, 113, 5, 156, 74, 63, 33, 130, 164, 171, 21, 225, 157, 165, 135, 147, 214, 61, 12, 240, 84, 193, 35], transactions: [[1, 47, 233, 54, 64, 51, 114, 236, 75, 248, 122, 3, 50, 117, 61, 94, 147, 51, 122, 236, 151, 224, 25, 28, 201, 166, 56, 54, 33, 79, 100, 94, 216, 183, 12, 167, 21, 109, 64, 39, 253, 126, 185, 13, 132, 118, 146, 225, 34, 217, 70, 38, 158, 5, 163, 24, 99, 1, 184, 26, 3, 197, 212, 247, 9, 1, 0, 5, 8, ...
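
For reference, a subscriber built with tonic might look like the sketch below. The proto package, client, and method names here (deshred, DeshredServiceClient, subscribe_entries) are assumptions for illustration only; the actual definitions come from jito-labs/mev-protos#46.

// Hypothetical client sketch; proto and service names are assumptions,
// the real ones are defined in jito-labs/mev-protos#46.
pub mod deshred {
    tonic::include_proto!("deshred"); // generated from the mev-protos definitions
}

use deshred::deshred_service_client::DeshredServiceClient;
use deshred::SubscribeEntriesRequest;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Connect to the address passed via --deshred-listen-address.
    let mut client = DeshredServiceClient::connect("http://127.0.0.1:50051").await?;

    // Open the server-streaming subscription and print entries as they arrive.
    let mut stream = client
        .subscribe_entries(SubscribeEntriesRequest {})
        .await?
        .into_inner();
    while let Some(entry) = stream.message().await? {
        println!("Received entry: {:?}", entry);
    }
    Ok(())
}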

How it works

  1. Packet to Shred: The proxy receives packets, and each packet is first deserialized into a Shred. This is copied from https://github.com/anza-xyz/agave/blob/3dccb3e785ce8e7fc8370f983c81ee9cf4326de5/core/src/window_service.rs#L321

  2. Shreds to Entry: Shreds are made from Entries. See the anza code for how shreds are made.

We go from shreds to entries by collecting only unique data shreds into buckets keyed by slot number. After inserting a shred, we check whether its slot's bucket is full: a bucket is full once it contains a shred marked last_in_slot and holds a number of shreds equal to that shred's index plus one.

Such a full bucket is deshredded and deserialized into Entry objects.

Subscribed clients receive these entry objects live. A minimal sketch of this packet-to-entries path is shown below.
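
For illustration, here is a minimal sketch of that path, assuming the solana-ledger, solana-entry, and bincode crates as dependencies. The bucket map mirrors the PR's shred_bucket_by_slot, but the surrounding types are simplified, and Shredder::deshred's exact signature varies across solana-ledger versions (the call shape here mirrors the PR).

use std::collections::HashMap;

use solana_entry::entry::Entry;
use solana_ledger::shred::{Shred, Shredder};

// Sketch: insert one received packet payload; if its slot's bucket becomes
// full, deshred the bucket and deserialize the payload into entries.
fn insert_and_try_deshred(
    shred_bucket_by_slot: &mut HashMap<u64, Vec<Shred>>,
    packet_payload: Vec<u8>,
) -> Option<Vec<Entry>> {
    // Step 1: packet -> shred (the same deserialization agave's window service does).
    let shred = Shred::new_from_serialized_shred(packet_payload).ok()?;
    if !shred.is_data() {
        return None; // only data shreds carry the entry payload
    }

    let slot = shred.slot();
    let bucket = shred_bucket_by_slot.entry(slot).or_default();
    if bucket.iter().any(|s| s.index() == shred.index()) {
        return None; // keep only unique data shreds
    }
    let is_last = shred.last_in_slot();
    let index = shred.index();
    bucket.push(shred);

    // Step 2: the bucket is full once it holds the last_in_slot shred and
    // exactly (index of the last_in_slot shred + 1) shreds in total.
    if is_last && bucket.len() == (index + 1) as usize {
        let mut shreds = shred_bucket_by_slot.remove(&slot)?;
        shreds.sort_unstable_by_key(Shred::index); // deshred expects index order
        let data = Shredder::deshred(shreds.as_slice()).ok()?;
        return bincode::deserialize::<Vec<Entry>>(&data).ok();
    }
    None
}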

sbs2001 changed the title from Deshred to Add deshred service on Feb 15, 2025
Signed-off-by: Shivam Sandbhor <[email protected]>
Comment on lines 76 to 103
if last_shred.0.last_in_slot() && slot_bucket.len() == (last_shred.0.index() + 1) as usize {
    debug!("deshredding slot {:?}", slot);
    let shreds = slot_bucket.iter().map(|shred| shred.0.clone()).collect_vec();

    let data = Shredder::deshred(shreds.as_slice());
    if data.is_err() {
        debug!("failed to deshred shreds {:?}", data.err());
        return;
    }
    let data = data.unwrap();

    let entries: Result<Vec<Entry>, _> = bincode::deserialize(&data);
    if entries.is_ok() {
        let entries = entries.unwrap();
        debug!("deshredded entries {:?}", entries);
        if entry_sender.receiver_count() > 0 {
            for entry in entries {
                let send_result = entry_sender.send(DeshredEntry {
                    num_hashes: entry.num_hashes,
                    hash: entry.hash.to_bytes().to_vec(),
                    // tbd: is bincode supported in other languages?
                    transactions: entry.transactions.iter().map(|t| bincode::serialize(t).unwrap()).collect_vec(),
                });
                if send_result.is_err() {
                    error!("failed to send deshredded entry to deshred server {:?}", send_result.err());
                }
            }
        }
    }
    shred_bucket_by_slot.remove(&slot);
}
sbs2001 (Author) commented Feb 16, 2025
Potential memory leak here if a slot bucket never fills. Not sure whether that ever happens or how common it is.

A simple solution would be to drop the bucket after a fixed interval, regardless of whether it has filled.
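
For illustration, a minimal sketch of that interval-based eviction, with a hypothetical MAX_SLOT_LAG threshold that is not from this PR:

use std::collections::HashMap;

// Hypothetical tuning knob: how many slots behind the newest observed
// slot a bucket may lag before it is dropped, filled or not.
const MAX_SLOT_LAG: u64 = 64;

// Run this periodically (or on every insert) to bound memory use.
fn evict_stale_buckets<T>(shred_bucket_by_slot: &mut HashMap<u64, Vec<T>>, newest_slot: u64) {
    shred_bucket_by_slot.retain(|&slot, _| slot + MAX_SLOT_LAG >= newest_slot);
}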

Changed the submodule path to my repo.
Ran cargo fmt.
Fixed the missing Arc import in heartbeat.rs.
sbs2001 (Author) commented Feb 19, 2025

@segfaultdoc I added 9a21c91 to address the review

sbs2001 (Author) commented Feb 24, 2025

Just bumping this up

esemeniuc (Collaborator) commented
@sbs2001 mind if i modify your code in another branch?

sbs2001 (Author) commented Feb 25, 2025

> @sbs2001 mind if i modify your code in another branch?

no problem :)
