Added user documentation
CihatAltiparmak committed Aug 14, 2024
1 parent 5e4e2d2 commit 02b23e5
Showing 4 changed files with 49 additions and 0 deletions.
1 change: 1 addition & 0 deletions README.md
@@ -10,6 +10,7 @@ This middleware benchmark tool aims to measure middleware effects on various sce

* [Perception Pipeline](./docs/scenarios/perception_pipeline_benchmark.md)
* [Basic Service Client Works](./docs/scenarios/basic_service_client_benchmark.md)
* [MoveIt Task Constructor Pick-Place Task](./docs/scenarios/moveit_task_construtor_benchmark.md)

## Getting Started

4 changes: 4 additions & 0 deletions docs/how_to_run.md
@@ -94,3 +94,7 @@ For instance, the selected test_case includes 20 goal poses. These 20 goals is s
This benchmark measures the total elapsed time, based on the interval between the client sending a request to the server and receiving the server's response. It uses the [example server](https://github.com/ros2/demos/blob/rolling/demo_nodes_cpp/src/services/add_two_ints_server.cpp) from the [ros2/demos](https://github.com/ros2/demos) packages.

In this benchmark scenario, the benchmarker node has only a client interface. The server it needs is started by [the launch file of this benchmark scenario](../launch/scenario_basic_service_client_benchmark.launch.py). The client sends a request to the server and waits for the response; once the response arrives, it sends the next request. This cycle is repeated `sending_request_number` times. You can configure the `sending_request_number` parameter in [this scenario's launch file](../launch/scenario_basic_service_client_benchmark.launch.py).
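
For reference, the scenario can be launched like the other benchmarks in this package. This is only a sketch: it assumes your workspace is already built and sourced, and the RMW implementation shown is just an example.

```sh
# Go to your built workspace and source it (paths are illustrative).
cd ws
source /opt/ros/rolling/setup.bash
source install/setup.bash
# Pick the RMW implementation you want to measure (rmw_zenoh_cpp is only an example).
export RMW_IMPLEMENTATION=rmw_zenoh_cpp
ros2 launch moveit_middleware_benchmark scenario_basic_service_client_benchmark.launch.py
```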

### [MoveIt Task Constructor Benchmark](scenarios/moveit_task_construtor_benchmark.md)

This benchmark measures the effect of the middleware on moveit_task_constructor scenarios. The Pick-Place task demo is used for this benchmarking scenario. In this scenario, the demo scene is first spawned so that the pick-place task can run successfully, and then the pick-place task is initialized. After the task is initialized, [moveit_task_constructor](https://github.com/moveit/moveit_task_constructor/blob/ros2) creates the plan. If planning succeeds, the plan is executed by [moveit_task_constructor](https://github.com/moveit/moveit_task_constructor/blob/ros2). Finally, the demo scene is destroyed so that the benchmark can be run again from the initial state. This makes it possible to repeat the same benchmark multiple times, which increases the reliability of the benchmark results. The pick-place task demo code is based on [this implementation](https://github.com/moveit/moveit_task_constructor/blob/ros2/demo/src/pick_place_task.cpp) in [moveit_task_constructor](https://github.com/moveit/moveit_task_constructor/blob/ros2).
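
Because the scene is rebuilt for every repetition, the same scenario can be repeated many times in a single run. As a hedged example, reusing the launch file and `benchmark_command_args` argument shown later in this commit (the repetition count is only illustrative):

```sh
# Run the pick-place benchmark with 20 repetitions (illustrative value).
ros2 launch moveit_middleware_benchmark scenario_moveit_task_constructor_benchmark.launch.py \
  benchmark_command_args:="--benchmark_repetitions=20"
```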
44 changes: 44 additions & 0 deletions docs/scenarios/moveit_task_construtor_benchmark.md
@@ -0,0 +1,44 @@
## How To Run the MoveIt Task Constructor Benchmark

First, source your ROS installation. It is suggested to test with the rolling version of ROS.

For instance, to test with rmw_zenoh, start the Zenoh router using the following commands in a terminal.
```sh
# go to your workspace
cd ws
# Be sure that ros2 daemon is killed.
pkill -9 -f ros && ros2 daemon stop
# Then start zenoh router
source /opt/ros/rolling/setup.bash
source install/setup.bash
export RMW_IMPLEMENTATION=rmw_zenoh_cpp
ros2 run rmw_zenoh_cpp rmw_zenohd
```

Select `rmw_zenoh_cpp` as your rmw_implementation and run the MoveIt Task Constructor benchmark launch file in another terminal.
```sh
# go to your workspace
cd ws
source /opt/ros/rolling/setup.bash
source install/setup.bash
export RMW_IMPLEMENTATION=rmw_zenoh_cpp # select your rmw_implementation to benchmark
ros2 launch moveit_middleware_benchmark scenario_moveit_task_constructor_benchmark.launch.py
```

By default, the benchmark runs with 20 repetitions. After the benchmark execution finishes, a JSON file named `middleware_benchmark_results.json` is created with the benchmark results. You can inspect the results in more detail inside this JSON file.
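
As a quick way to inspect the results, the file can be read with Python. This is only a sketch: it assumes the standard Google Benchmark JSON layout with a top-level `benchmarks` list, which matches the `--benchmark_out_format=json` option used below.

```sh
# Print the name, wall-clock time, and time unit of every recorded benchmark entry.
python3 -c '
import json
with open("middleware_benchmark_results.json") as f:
    data = json.load(f)
for entry in data.get("benchmarks", []):
    print(entry["name"], entry.get("real_time"), entry.get("time_unit"))
'
```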

If you want to customize your benchmark arguments or select a different test case, you can use the command below.

```shell
ros2 launch moveit_middleware_benchmark scenario_moveit_task_constructor_benchmark.launch.py benchmark_command_args:="--benchmark_out=middleware_benchmark_results.json --benchmark_out_format=json --benchmark_repetitions=1"
```
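
To compare middleware implementations, the same launch file can be run once per RMW implementation with separate output files. This is a hedged example: `rmw_fastrtps_cpp` stands in for whichever other RMW you want to compare against, and the output file names are only illustrative.

```sh
# Benchmark with a DDS-based RMW (example: Fast DDS).
export RMW_IMPLEMENTATION=rmw_fastrtps_cpp
ros2 launch moveit_middleware_benchmark scenario_moveit_task_constructor_benchmark.launch.py \
  benchmark_command_args:="--benchmark_out=results_fastdds.json --benchmark_out_format=json --benchmark_repetitions=20"

# Benchmark with rmw_zenoh (remember to start the Zenoh router first, as shown above).
export RMW_IMPLEMENTATION=rmw_zenoh_cpp
ros2 launch moveit_middleware_benchmark scenario_moveit_task_constructor_benchmark.launch.py \
  benchmark_command_args:="--benchmark_out=results_zenoh.json --benchmark_out_format=json --benchmark_repetitions=20"
```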

## How to benchmark the MoveIt Task Constructor

The main idea is to set up the demo scene for the task, plan the task, and then execute it. After that, the demo scene is destroyed so that the benchmark of the same task can be conducted more than once. Thus, the middleware's effect on the elapsed time of the pick-place task demo can be measured reliably.

![MoveIt Task Constructor benchmark demo](../videos/moveit_task_constructor_benchmark.webm)

## How to create test cases

You can adjust the settings of the pick-place task demo using [this parameter file in the config directory](../../config/pick_place_demo_configs.yaml). In this benchmark scenario, it is enough to change `--benchmark_repetitions` through the `benchmark_command_args` argument.
Binary file not shown.
