
z_ping/z_pong SHM does not work #1695

Open
fengtuo58 opened this issue Jan 8, 2025 · 6 comments
Labels
invalid This doesn't seem right

Comments

@fengtuo58

fengtuo58 commented Jan 8, 2025

Describe the bug

./z_pong --enable-shm
./z_ping --enable-shm 1000
The ping terminal shows the following output:
root@ubuntu:/release/examples# ./z_ping --enable-shm 1000
Warming up for 1s...
1000 bytes: seq=0 rtt=127µs lat=63µs
1000 bytes: seq=1 rtt=124µs lat=62µs
1000 bytes: seq=2 rtt=128µs lat=64µs
1000 bytes: seq=3 rtt=237µs lat=118µs
1000 bytes: seq=4 rtt=267µs lat=133µs
1000 bytes: seq=5 rtt=189µs lat=94µs
1000 bytes: seq=6 rtt=148µs lat=74µs
1000 bytes: seq=7 rtt=135µs lat=67µs
1000 bytes: seq=8 rtt=171µs lat=85µs
1000 bytes: seq=9 rtt=150µs lat=75µs
1000 bytes: seq=10 rtt=180µs lat=90µs
1000 bytes: seq=11 rtt=148µs lat=74µs
1000 bytes: seq=12 rtt=142µs lat=71µs
1000 bytes: seq=13 rtt=143µs lat=71µs
1000 bytes: seq=14 rtt=136µs lat=68µs
1000 bytes: seq=15 rtt=142µs lat=71µs
1000 bytes: seq=16 rtt=140µs lat=70µs
1000 bytes: seq=17 rtt=133µs lat=66µs
1000 bytes: seq=18 rtt=149µs lat=74µs
1000 bytes: seq=19 rtt=169µs lat=84µs
1000 bytes: seq=20 rtt=173µs lat=86µs
1000 bytes: seq=21 rtt=175µs lat=87µs
1000 bytes: seq=22 rtt=157µs lat=78µs
1000 bytes: seq=23 rtt=131µs lat=65µs
1000 bytes: seq=24 rtt=157µs lat=78µs
1000 bytes: seq=25 rtt=137µs lat=68µs
1000 bytes: seq=26 rtt=143µs lat=71µs

To reproduce

Run the example ping/pong demo with --enable-shm

System info

Linux tegra-ubuntu 5.15.98-rt-tegra #1 SMP PREEMPT_RT Mon Dec 23 18:22:56 CST 2024 aarch64 aarch64 aarch64 GNU/Linux
Zenoh version:
commit 0549678
Author: DenisBiryukov91 [email protected]
Date: Fri Dec 20 15:19:33 2024 +0100
reuse zenoh-examples common arguments parsing in zenoh-ext examples (#1679)

@fengtuo58 fengtuo58 added the bug Something isn't working label Jan 8, 2025
@Mallets Mallets added invalid This doesn't seem right and removed bug Something isn't working labels Jan 9, 2025
@Mallets
Member

Mallets commented Jan 9, 2025

From the logs you provided it seems everything is working. Please provide more details.

I'll mark this issue as invalid for the time being.

@fengtuo58
Author

The latency should be 1-2 µs in SHM mode, but the latency is the same as without SHM.

@Mallets
Member

Mallets commented Jan 13, 2025

What's your basis for claiming that latency should be 1-2 µs in SHM mode? That is not what Zenoh SHM delivers in its current form. The current Zenoh SHM implementation and architecture starts paying off for large payloads (e.g. above ~2 KB according to our tests).

Can you test with larger payloads, e.g. 1 MB, to see whether you observe a difference?
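A payload-size sweep can be scripted along these lines (a sketch: it only prints the z_ping invocations to run, and assumes the example binaries sit in the current directory with z_pong --enable-shm already running):

```shell
# Print a z_ping invocation for each payload size, from 1 KB up to 10 MB.
# Run the printed commands one by one and compare the reported latencies.
for size in 1000 10000 100000 1000000 10000000; do
  echo "./z_ping --enable-shm $size"
done
```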

@fengtuo58
Author

./z_ping --enable-shm 1000000
Warming up for 1s...
1000000 bytes: seq=0 rtt=1142µs lat=571µs
1000000 bytes: seq=1 rtt=1105µs lat=552µs
1000000 bytes: seq=2 rtt=1099µs lat=549µs
1000000 bytes: seq=3 rtt=1368µs lat=684µs
1000000 bytes: seq=4 rtt=1272µs lat=636µs
1000000 bytes: seq=5 rtt=1144µs lat=572µs
1000000 bytes: seq=6 rtt=1372µs lat=686µs
1000000 bytes: seq=7 rtt=1145µs lat=572µs
1000000 bytes: seq=8 rtt=1248µs lat=624µs
1000000 bytes: seq=9 rtt=1138µs lat=569µs
1000000 bytes: seq=10 rtt=1197µs lat=598µs

./z_ping --enable-shm 10000000
Warming up for 1s...
10000000 bytes: seq=0 rtt=12650µs lat=6325µs
10000000 bytes: seq=1 rtt=13272µs lat=6636µs
10000000 bytes: seq=2 rtt=13015µs lat=6507µs
10000000 bytes: seq=3 rtt=13300µs lat=6650µs
10000000 bytes: seq=4 rtt=13012µs lat=6506µs
10000000 bytes: seq=5 rtt=12690µs lat=6345µs
10000000 bytes: seq=6 rtt=13377µs lat=6688µs
10000000 bytes: seq=7 rtt=14612µs lat=7306µs
10000000 bytes: seq=8 rtt=12535µs lat=6267µs
10000000 bytes: seq=9 rtt=12984µs lat=6492µs
10000000 bytes: seq=10 rtt=14186µs lat=7093µs
10000000 bytes: seq=11 rtt=13165µs lat=6582µs
10000000 bytes: seq=12 rtt=13413µs lat=6706µs
10000000 bytes: seq=13 rtt=12307µs lat=6153µs
10000000 bytes: seq=14 rtt=17048µs lat=8524µs
10000000 bytes: seq=15 rtt=13360µs lat=6680µs
10000000 bytes: seq=16 rtt=13413µs lat=6706µs
10000000 bytes: seq=17 rtt=15679µs lat=7839µs
10000000 bytes: seq=18 rtt=12616µs lat=6308µs
10000000 bytes: seq=19 rtt=12594µs lat=6297µs
10000000 bytes: seq=20 rtt=13513µs lat=6756µs
10000000 bytes: seq=21 rtt=13630µs lat=6815µs
10000000 bytes: seq=22 rtt=13269µs lat=6634µs
10000000 bytes: seq=23 rtt=14053µs lat=7026µs
10000000 bytes: seq=24 rtt=13007µs lat=6503µs
10000000 bytes: seq=25 rtt=12639µs lat=6319µs

Scout shows the following:
Hello { zid: b8f666c7dd50c294f7ac64d2252f, whatami: Peer, locators: [tcp/[fe80::22:23ff:fe01:203]:45907, tcp/192.168.195.3:45907, tcp/192.168.30.42:45907] }
Hello { zid: cb30b1d0b89bd8cbfcd747bf7f617538, whatami: Peer, locators: [tcp/[fe80::22:23ff:fe01:203]:44047, tcp/192.168.195.3:44047, tcp/192.168.30.42:44047] }
I don’t think SHM mode is working.
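One quick sanity check is to look for shared-memory segments while both examples are running. This is a sketch based on the assumption that Zenoh's POSIX SHM backend (the zenoh_shm::posix_shm module seen in the debug logs) maps its segments under /dev/shm on Linux; exact segment names and prefixes are implementation details and may vary:

```shell
# List POSIX shared-memory objects while z_ping/z_pong are running.
# If the SHM backend is active, Zenoh-created segments should appear here.
ls -l /dev/shm
```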

@Mallets
Member

Mallets commented Jan 13, 2025

Please run the examples with RUST_LOG=debug env variable and post the logs.

@fengtuo58
Author

fengtuo58 commented Jan 14, 2025

RUST_LOG=debug ./z_pong --enable-shm
2025-01-14T00:53:08.663853Z DEBUG main ThreadId(01) zenoh::api::session: Config: Config(Config { id: e94566d44c37c3e82542eb6fd4b327f6, metadata: Null, mode: None, connect: ConnectConfig { timeout_ms: None, endpoints: Unique([]), exit_on_failure: None, retry: None }, listen: ListenConfig { timeout_ms: None, endpoints: Dependent(ModeValues { router: Some([tcp/[::]:7447]), peer: Some([tcp/[::]:0]), client: None }), exit_on_failure: None, retry: None }, open: OpenConf { return_conditions: ReturnConditionsConf { connect_scouted: None, declares: None } }, scouting: ScoutingConf { timeout: None, delay: None, multicast: ScoutingMulticastConf { enabled: None, address: None, interface: None, ttl: None, autoconnect: None, listen: None }, gossip: GossipConf { enabled: None, multihop: None, autoconnect: None } }, timestamping: TimestampingConf { enabled: None, drop_future_timestamp: None }, queries_default_timeout: None, routing: RoutingConf { router: RouterRoutingConf { peers_failover_brokering: None }, peer: PeerRoutingConf { mode: None } }, aggregation: AggregationConf { subscribers: [], publishers: [] }, qos: QoSConfig { publication: PublisherQoSConfList([]) }, transport: TransportConf { unicast: TransportUnicastConf { open_timeout: 10000, accept_timeout: 10000, accept_pending: 100, max_sessions: 1000, max_links: 1, lowlatency: false, qos: QoSUnicastConf { enabled: true }, compression: CompressionUnicastConf { enabled: false } }, multicast: TransportMulticastConf { join_interval: Some(2500), max_sessions: Some(1000), qos: QoSMulticastConf { enabled: false }, compression: CompressionMulticastConf { enabled: false } }, link: TransportLinkConf { protocols: None, tx: LinkTxConf { sequence_number_resolution: U32, lease: 10000, keep_alive: 4, batch_size: 65535, queue: QueueConf { size: QueueSizeConf { control: 1, real_time: 1, interactive_high: 1, interactive_low: 1, data_high: 2, data: 4, data_low: 2, background: 1 }, congestion_control: CongestionControlConf { drop: 
CongestionControlDropConf { wait_before_drop: 1000, max_wait_before_drop_fragments: 50000 }, block: CongestionControlBlockConf { wait_before_close: 5000000 } }, batching: BatchingConf { enabled: true, time_limit: 1 } }, threads: 3 }, rx: LinkRxConf { buffer_size: 65535, max_message_size: 1073741824 }, tls: TLSConf { root_ca_certificate: None, listen_private_key: None, listen_certificate: None, enable_mtls: None, connect_private_key: None, connect_certificate: None, verify_name_on_connect: None, close_link_on_expiration: None, so_sndbuf: None, so_rcvbuf: None, root_ca_certificate_base64: None, listen_private_key_base64: None, listen_certificate_base64: None, connect_private_key_base64: None, connect_certificate_base64: None }, tcp: TcpConf { so_sndbuf: None, so_rcvbuf: None }, unixpipe: UnixPipeConf { file_access_mask: None } }, shared_memory: ShmConf { enabled: true }, auth: AuthConf { usrpwd: UsrPwdConf { user: None, password: None, dictionary_file: None }, pubkey: PubKeyConf { public_key_pem: None, private_key_pem: None, public_key_file: None, private_key_file: None, key_size: None, known_keys_file: None } } }, adminspace: AdminSpaceConf { enabled: false, permissions: PermissionsConf { read: true, write: false } }, downsampling: [], access_control: AclConfig { enabled: false, default_permission: Deny, rules: None, subjects: None, policies: None }, plugins_loading: PluginsLoading { enabled: false, search_dirs: LibSearchDirs([Spec(LibSearchSpec { kind: CurrentExeParent, value: None }), Path("."), Path("~/.zenoh/lib"), Path("/opt/homebrew/lib"), Path("/usr/local/lib"), Path("/usr/lib")]) }, plugins: Object {} })
2025-01-14T00:53:08.669064Z DEBUG main ThreadId(01) zenoh::net::runtime: Zenoh Rust API v1.0.0-dev-404-gacbeb4bb-modified
2025-01-14T00:53:08.669083Z INFO main ThreadId(01) zenoh::net::runtime: Using ZID: e94566d44c37c3e82542eb6fd4b327f6
2025-01-14T00:53:08.669110Z DEBUG main ThreadId(01) zenoh::net::routing::interceptor::access_control: Access control is disabled
2025-01-14T00:53:08.672111Z DEBUG main ThreadId(01) zenoh_shm::posix_shm::segment: Created SHM segment, size: 24, prefix: auth, id: 2708305545
2025-01-14T00:53:08.672274Z DEBUG main ThreadId(01) zenoh::net::routing::hat::p2p_peer::gossip: [Gossip] Add node (self) e94566d44c37c3e82542eb6fd4b327f6
2025-01-14T00:53:08.672321Z DEBUG main ThreadId(01) zenoh::net::routing::router: New Face{0, e94566d44c37c3e82542eb6fd4b327f6}
2025-01-14T00:53:08.672391Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: Try to add listener: tcp/[::]:0: ConnectionRetryConf { exit_on_failure: true, period_init_ms: 1000, period_max_ms: 4000, period_increase_factor: 2.0 }
2025-01-14T00:53:08.672787Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: Listener added: tcp/[::]:44925
2025-01-14T00:53:08.673157Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/[fe80::22:23ff:fe01:203]:44925
2025-01-14T00:53:08.673190Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/192.168.195.3:44925
2025-01-14T00:53:08.673228Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/192.168.30.42:44925
2025-01-14T00:53:08.673376Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: UDP port bound to 224.0.0.224:7446
2025-01-14T00:53:08.673456Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: Joined multicast group 224.0.0.224 on interface 192.168.195.3
2025-01-14T00:53:08.673466Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: zenohd listening scout messages on 224.0.0.224:7446
2025-01-14T00:53:08.673561Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: UDP port bound to 192.168.195.3:49346
2025-01-14T00:53:08.673741Z DEBUG net-0 ThreadId(03) zenoh::net::runtime::orchestrator: Waiting for UDP datagram...
2025-01-14T00:53:09.174263Z DEBUG main ThreadId(01) zenoh::net::routing::dispatcher::resource: Register resource test/pong
2025-01-14T00:53:09.174434Z DEBUG main ThreadId(01) zenoh::net::routing::dispatcher::interests: Face{0, e94566d44c37c3e82542eb6fd4b327f6} Declare interest 3 (test/pong)
2025-01-14T00:53:09.174472Z DEBUG main ThreadId(01) zenoh::net::routing::hat::p2p_peer::interests: Propagate DeclareFinal Face{0, e94566d44c37c3e82542eb6fd4b327f6}:3
2025-01-14T00:53:09.174489Z DEBUG main ThreadId(01) zenoh::net::routing::dispatcher::pubsub: Face{0, e94566d44c37c3e82542eb6fd4b327f6} Declare subscriber 4 (test/ping)
2025-01-14T00:53:09.174500Z DEBUG main ThreadId(01) zenoh::net::routing::dispatcher::resource: Register resource test/ping

/release/examples# RUST_LOG=debug ./z_ping --enable-shm 100000
2025-01-14T00:53:27.485482Z DEBUG main ThreadId(01) zenoh::api::session: Config: Config(Config { id: 1762ea6b9372f782997352ae42be98c5, metadata: Null, mode: None, connect: ConnectConfig { timeout_ms: None, endpoints: Unique([]), exit_on_failure: None, retry: None }, listen: ListenConfig { timeout_ms: None, endpoints: Dependent(ModeValues { router: Some([tcp/[::]:7447]), peer: Some([tcp/[::]:0]), client: None }), exit_on_failure: None, retry: None }, open: OpenConf { return_conditions: ReturnConditionsConf { connect_scouted: None, declares: None } }, scouting: ScoutingConf { timeout: None, delay: None, multicast: ScoutingMulticastConf { enabled: None, address: None, interface: None, ttl: None, autoconnect: None, listen: None }, gossip: GossipConf { enabled: None, multihop: None, autoconnect: None } }, timestamping: TimestampingConf { enabled: None, drop_future_timestamp: None }, queries_default_timeout: None, routing: RoutingConf { router: RouterRoutingConf { peers_failover_brokering: None }, peer: PeerRoutingConf { mode: None } }, aggregation: AggregationConf { subscribers: [], publishers: [] }, qos: QoSConfig { publication: PublisherQoSConfList([]) }, transport: TransportConf { unicast: TransportUnicastConf { open_timeout: 10000, accept_timeout: 10000, accept_pending: 100, max_sessions: 1000, max_links: 1, lowlatency: false, qos: QoSUnicastConf { enabled: true }, compression: CompressionUnicastConf { enabled: false } }, multicast: TransportMulticastConf { join_interval: Some(2500), max_sessions: Some(1000), qos: QoSMulticastConf { enabled: false }, compression: CompressionMulticastConf { enabled: false } }, link: TransportLinkConf { protocols: None, tx: LinkTxConf { sequence_number_resolution: U32, lease: 10000, keep_alive: 4, batch_size: 65535, queue: QueueConf { size: QueueSizeConf { control: 1, real_time: 1, interactive_high: 1, interactive_low: 1, data_high: 2, data: 4, data_low: 2, background: 1 }, congestion_control: CongestionControlConf { drop: 
CongestionControlDropConf { wait_before_drop: 1000, max_wait_before_drop_fragments: 50000 }, block: CongestionControlBlockConf { wait_before_close: 5000000 } }, batching: BatchingConf { enabled: true, time_limit: 1 } }, threads: 3 }, rx: LinkRxConf { buffer_size: 65535, max_message_size: 1073741824 }, tls: TLSConf { root_ca_certificate: None, listen_private_key: None, listen_certificate: None, enable_mtls: None, connect_private_key: None, connect_certificate: None, verify_name_on_connect: None, close_link_on_expiration: None, so_sndbuf: None, so_rcvbuf: None, root_ca_certificate_base64: None, listen_private_key_base64: None, listen_certificate_base64: None, connect_private_key_base64: None, connect_certificate_base64: None }, tcp: TcpConf { so_sndbuf: None, so_rcvbuf: None }, unixpipe: UnixPipeConf { file_access_mask: None } }, shared_memory: ShmConf { enabled: true }, auth: AuthConf { usrpwd: UsrPwdConf { user: None, password: None, dictionary_file: None }, pubkey: PubKeyConf { public_key_pem: None, private_key_pem: None, public_key_file: None, private_key_file: None, key_size: None, known_keys_file: None } } }, adminspace: AdminSpaceConf { enabled: false, permissions: PermissionsConf { read: true, write: false } }, downsampling: [], access_control: AclConfig { enabled: false, default_permission: Deny, rules: None, subjects: None, policies: None }, plugins_loading: PluginsLoading { enabled: false, search_dirs: LibSearchDirs([Spec(LibSearchSpec { kind: CurrentExeParent, value: None }), Path("."), Path("~/.zenoh/lib"), Path("/opt/homebrew/lib"), Path("/usr/local/lib"), Path("/usr/lib")]) }, plugins: Object {} })
2025-01-14T00:53:27.487113Z DEBUG main ThreadId(01) zenoh::net::runtime: Zenoh Rust API v1.0.0-dev-404-gacbeb4bb-modified
2025-01-14T00:53:27.487128Z INFO main ThreadId(01) zenoh::net::runtime: Using ZID: 1762ea6b9372f782997352ae42be98c5
2025-01-14T00:53:27.487180Z DEBUG main ThreadId(01) zenoh::net::routing::interceptor::access_control: Access control is disabled
2025-01-14T00:53:27.490041Z DEBUG main ThreadId(01) zenoh_shm::posix_shm::segment: Created SHM segment, size: 24, prefix: auth, id: 1507422289
2025-01-14T00:53:27.490260Z DEBUG main ThreadId(01) zenoh::net::routing::hat::p2p_peer::gossip: [Gossip] Add node (self) 1762ea6b9372f782997352ae42be98c5
2025-01-14T00:53:27.490372Z DEBUG main ThreadId(01) zenoh::net::routing::router: New Face{0, 1762ea6b9372f782997352ae42be98c5}
2025-01-14T00:53:27.490429Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: Try to add listener: tcp/[::]:0: ConnectionRetryConf { exit_on_failure: true, period_init_ms: 1000, period_max_ms: 4000, period_increase_factor: 2.0 }
2025-01-14T00:53:27.490694Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: Listener added: tcp/[::]:44047
2025-01-14T00:53:27.491058Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/[fe80::22:23ff:fe01:203]:44047
2025-01-14T00:53:27.491075Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/192.168.195.3:44047
2025-01-14T00:53:27.491086Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: Zenoh can be reached at: tcp/192.168.30.42:44047
2025-01-14T00:53:27.491253Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: UDP port bound to 224.0.0.224:7446
2025-01-14T00:53:27.491273Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: Joined multicast group 224.0.0.224 on interface 192.168.195.3
2025-01-14T00:53:27.491282Z INFO main ThreadId(01) zenoh::net::runtime::orchestrator: zenohd listening scout messages on 224.0.0.224:7446
2025-01-14T00:53:27.491322Z DEBUG main ThreadId(01) zenoh::net::runtime::orchestrator: UDP port bound to 192.168.195.3:57617
2025-01-14T00:53:27.491500Z DEBUG net-0 ThreadId(03) zenoh::net::runtime::orchestrator: Waiting for UDP datagram...
2025-01-14T00:53:27.491555Z DEBUG net-0 ThreadId(03) zenoh::net::runtime::orchestrator: Try to connect to peer e94566d44c37c3e82542eb6fd4b327f6 via any of [tcp/[fe80::22:23ff:fe01:203]:44925, tcp/192.168.195.3:44925, tcp/192.168.30.42:44925]
