Author : Mattia Musumeci [email protected]
This is the second assignment developed for the Experimental Robotics Laboratory course of the University of Genoa.
At this link it is possible to find the documentation for the software contained in this repository.
The scenario involves a robot deployed in an indoor environment for surveillance purposes, whose objective is to visit the different locations of the environment and explore them for a given amount of time. In this context, the robot is equipped with a rechargeable battery, which needs to be recharged in a specific location when required.
The robot is also equipped with a robotic arm carrying a camera at the end-effector. That is because, initially, the robot has no information about the environment it is in: it has to use the camera to identify the markers placed in its proximity, which provide information about the planimetry of the environment.
The software contained in this repository has been developed for ROS Noetic 1.15.9.
The Robot Operating System (ROS) is an open-source, meta-operating system for your robot. It provides the services you would expect from an operating system, including hardware abstraction, low-level device control, implementation of commonly-used functionality, message-passing between processes, and package management. It also provides tools and libraries for obtaining, building, writing, and running code across multiple computers. The full installation of ROS also contains Gazebo, the most common 3-dimensional physics simulator used in robotics, and RViz, a visualization widget for the simulation: together, these two tools allow the user to simulate a specific robot in a specific environment, but also to view the simulated robot and environment, analyse logs, and replay sensor information.
The behavior of the robot has been described with a finite state machine developed with SMACH.
The SMACH library is a task-level architecture for describing and executing complex behaviors. At its core, it is a ROS-independent Python library to build hierarchical state machines; the interaction with the ROS environment is obtained by implementing dedicated states inside the finite state machine.
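As an illustration of how such a state machine is put together, here is a minimal SMACH sketch; the state names, outcomes, and transitions are purely illustrative and do not correspond to the states actually used in this package:

```python
import smach

# Illustrative state: the real one would move the robot and query the ontology.
class Patrol(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['visited', 'battery_low'])
        self.visits = 0

    def execute(self, userdata):
        self.visits += 1
        # Pretend the battery runs low on the first visit only.
        return 'battery_low' if self.visits == 1 else 'visited'

class Recharge(smach.State):
    def __init__(self):
        smach.State.__init__(self, outcomes=['recharged'])

    def execute(self, userdata):
        return 'recharged'

# Build the hierarchical state machine and wire the transitions.
sm = smach.StateMachine(outcomes=['done'])
with sm:
    smach.StateMachine.add('PATROL', Patrol(),
                           transitions={'visited': 'done',
                                        'battery_low': 'RECHARGE'})
    smach.StateMachine.add('RECHARGE', Recharge(),
                           transitions={'recharged': 'PATROL'})

print(sm.execute())
```

Note that, since the SMACH core is ROS-independent, this sketch runs without a ROS master.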
The ontology concepts and reasoning have been implemented with ARMOR.
A ROS Multi-Ontology Reference (ARMOR) is a powerful and versatile management system for single- and multi-ontology architectures under ROS. It allows the user to load, query, and modify multiple ontologies, and it requires very little knowledge of the OWL APIs and Java.
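For reference, this is roughly how a node can talk to ARMOR through its ROS service. The ontology path, IRI, and names below are placeholders, and the exact argument layout of the LOAD FILE directive should be checked against the ARMOR documentation:

```python
import rospy
from armor_msgs.srv import ArmorDirective, ArmorDirectiveRequest

rospy.init_node('armor_client_example')
rospy.wait_for_service('/armor_interface_srv')
armor = rospy.ServiceProxy('/armor_interface_srv', ArmorDirective)

# Load an ontology from file; path, IRI, and names are placeholders.
req = ArmorDirectiveRequest()
req.armor_request.client_name = 'example_client'
req.armor_request.reference_name = 'example_reference'
req.armor_request.command = 'LOAD'
req.armor_request.primary_command_spec = 'FILE'
req.armor_request.secondary_command_spec = ''
req.armor_request.args = ['/path/to/ontology.owl',
                          'http://example.org/ontology',
                          'true', 'PELLET', 'false']

res = armor(req)
print(res.armor_response.success)
```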
The markers used in this project are ArUco markers: synthetic square markers made up of a wide black border and an inner binary matrix that identifies the marker. The black border helps the marker to be quickly detected in an image, and the binary coding allows for error detection and correction. The size of the marker determines the size of the internal matrix: for example, a 4x4 marker has an internal matrix composed of 16 bits.
For detecting the markers, the OpenCV library has been used. The Open Source Computer Vision (OpenCV) library is a free and open-source computer vision and machine learning software library. It is aimed at real-time computer vision and provides a comprehensive set of algorithms for tasks such as image processing, object detection and recognition, video analysis, and machine learning.
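As a minimal example of the detection step, using the legacy cv2.aruco API shipped with ROS Noetic's OpenCV (the image path and the dictionary are assumptions, since the actual ones depend on the simulation):

```python
import cv2
import cv2.aruco as aruco

# Placeholder image: in this project the frames come from the robot camera.
image = cv2.imread('frame.png')
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)

# The dictionary is an assumption: it must match the markers in the scene.
dictionary = aruco.Dictionary_get(aruco.DICT_ARUCO_ORIGINAL)
parameters = aruco.DetectorParameters_create()

# Detect markers and print the decoded ids.
corners, ids, rejected = aruco.detectMarkers(gray, dictionary,
                                             parameters=parameters)
if ids is not None:
    print('Detected marker ids:', ids.flatten())
```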
The following image shows the environment the robot is placed in.
The robot used in the simulation is a differential drive robot equipped with a simple robotic arm with only three joints, a camera placed at the end-effector, and a laser scanner. The following image shows the rendering of the robot.
The robot description is located in this folder.
The following images show the types and placement of the markers used in the simulation: as can be seen, all the markers are in the same location. The robot will be spawned at the center of that location so that it can identify all the markers just by controlling the joints of the robotic arm.
Another image of the same location from a different perspective:
The software contained in this repository is highly dependent on the architecture developed in the first assignment, which can be found in this GitHub repository. After correctly following the INSTALLATION AND RUNNING section of the previous assignment, it is possible to proceed with this installation.
The software contained in this repository is a ROS package.
Therefore, in order to install the software, it is necessary to create a workspace.
Notice that it is also possible to use an already existing workspace.
```bash
mkdir -p [workspace_name]/src
```
Then, clone this repository inside the src folder just created:
```bash
cd [workspace_name]/src/
git clone [this_repo_link] .
```
Then, rebuild the workspace by returning to the workspace folder:
```bash
cd ..
catkin_make
```
The setup.bash file must be sourced so that ROS can find the workspace.
To do this, the following lines must be added at the end of the .bashrc file:
```bash
source [workspace_folder]/devel/setup.bash
export PYTHONPATH=$PYTHONPATH:[workspace_folder]/src
```
In order to run the scripts, it is necessary to first run the ROS master.
Open a new console and run the following command:
```bash
roscore
```
Some launch files have been prepared in order to simplify the execution.
Into different terminals, run the following commands:
```bash
roslaunch final_assignment simulation_enviornment.launch
roslaunch final_assignment armor_builder.launch
roslaunch final_assignment robot_surveillance.launch
```
The component diagram shows all the blocks and interfaces that have been used or developed to obtain the desired software architecture.
- The `marker_server` node provides the necessary information regarding a room through the related ArUco marker id. It interacts with:
  - The `marker_detector` node through the `/room_info` service.
- The `marker_detector` node performs the preliminary inspection routine to obtain all the necessary ArUco marker ids. The ids are then used to obtain the information about the topology of the environment. It interacts with:
  - The `marker_server` node through the `/room_info` service.
  - The `robot_inspection_routine` node through the `/robot_inspection_routine` action.
  - The `ontology_map_builder` node through the `/ontology_map/build_map` topic.
- The `robot_inspection_routine` node makes the arm of the robot rotate in circles at different pitches so that all the ArUco markers around the robot are scanned correctly. It interacts with:
  - The `marker_detector` node through the `/robot_inspection_routine` action.
- The `ontology_map_builder` node loads the default ontology into ARMOR and builds the map following the user requests. It also contains a mapping between each room and its position with respect to the world frame. Notice that, in this context, the "position of a room" is defined as a point inside the room that the robot is able to reach. It interacts with:
  - The `armor_service` library through the `/armor_interface_srv` service.
  - The `marker_detector` node through the `/ontology_map/build_map` topic.
  - The `robot_behavior` node through the `/ontology_map/reference_name` service.
  - The `robot_behavior` node through the `/ontology_map/room_position` service.
- The `motion_controller` node controls the movement of the robot. It interacts with:
  - The `robot_behavior` node through the `/follow_path` action.
  - The `move_base` node through the `/move_base` action (a minimal client sketch follows this list).
- The `move_base` node makes the robot move to a goal pose. It interacts with:
  - The `motion_controller` node through the `/move_base` action.
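As a reference for the last interaction, this is roughly what a `move_base` client looks like; the goal coordinates below are placeholders:

```python
import rospy
import actionlib
from move_base_msgs.msg import MoveBaseAction, MoveBaseGoal

rospy.init_node('move_base_client_example')
client = actionlib.SimpleActionClient('move_base', MoveBaseAction)
client.wait_for_server()

# Placeholder goal: the actual waypoints come from the planned path.
goal = MoveBaseGoal()
goal.target_pose.header.frame_id = 'map'
goal.target_pose.header.stamp = rospy.Time.now()
goal.target_pose.pose.position.x = 1.0
goal.target_pose.pose.position.y = 2.0
goal.target_pose.pose.orientation.w = 1.0

client.send_goal(goal)
client.wait_for_result()
```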
The remaining nodes of the architecture are explained in the README of the previous assignment. A more detailed explanation of the use of the interfaces is available here.
This sequence diagram shows a possible execution of the software contained in this repository. In more detail, it shows the execution in time of all the nodes and the requests/responses exchanged between them.
Notice that this diagram only shows the beginning of the execution, hence the detection of the ArUco markers and the building of the ontology. That is because the remaining part of the diagram is the same as the one shown in the README of the previous assignment.
The first horizontal line shows that there can be multiple iterations of performing the inspection routine and then communicating the marker ids to the marker server. That is because, during an inspection routine, the robot might not have detected all the markers, or some of the detected ids might be wrong.
The second horizontal line shows the end of this sequence diagram and the beginning of the sequence diagram shown in the repository of the previous assignment.
To implement the interfaces between the components, the following nodes have been developed:
- The `ontology_map_builder` node:
  - Provides the `/ontology_map/reference_name` service, of type `ReferenceName.srv`, to provide the reference name of the ontology that is loaded into ARMOR. This is done only once the ontology is fully created and loaded.
  - Provides the `/ontology_map/room_position` service, of type `RoomPosition.srv`, to provide a position inside the requested room. The position is measured with respect to the world frame.
  - Subscribes to the `/ontology_map/build_map` topic, of type `OntologyMap.msg`, to receive the complete topology of the environment through the message format.
- The `motion_controller` node:
  - Creates a `/follow_path` action server, of type `FollowPath.action`, to make the robot follow a path composed of an ordered list of waypoints.
  - Subscribes to the `/odom` topic, of type `Odometry`, to retrieve the current position of the robot.
  - Creates a client for the `/move_base` action server, of type `MoveBase.action`, to make the robot move between two consecutive waypoints of the path.
- The `planner_node` node:
  - Creates a client for the `/compute_path` action server, of type `ComputePathAction.action`, for computing a path of waypoints from a start to a goal position.
- The `robot_inspection_routine` node:
  - Publishes to the `/joint0_position_controller/command` topic, of type `Float64`, for controlling the first joint of the robot arm.
  - Publishes to the `/joint1_position_controller/command` topic, of type `Float64`, for controlling the second joint of the robot arm.
  - Publishes to the `/camera_position_controller/command` topic, of type `Float64`, for controlling the camera joint of the robot arm.
  - Creates a `/robot_inspection_routine` action server, of type `RobotInspectionRoutine.action`, to make the robot move the arm in a spherical pattern.
- The `marker_detector` node:
  - Subscribes to the `/camera/image_raw` topic, of type `Image.msg`, for receiving the camera images and detecting any ArUco marker inside the image.
  - Publishes to the `/cmd_vel` topic, of type `Twist.msg`, for updating the velocity of the robot. This is used for making the robot stand still while it is scanning for the markers.
  - Publishes to the `/ontology_map/build_map` topic, of type `OntologyMap.msg`, for publishing all the necessary information regarding the ontology that needs to be loaded on ARMOR.
  - Uses the `/room_info` service, of type `RoomInformation.srv`, for requesting the information about the room related to an ArUco marker id (a minimal client sketch follows this list).
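A sketch of the last interaction is shown below. The package providing `RoomInformation.srv` and the request/response field names are assumptions based on the descriptions above:

```python
import rospy
# Assumption: RoomInformation.srv is provided by this package.
from final_assignment.srv import RoomInformation

rospy.init_node('room_info_client_example')
rospy.wait_for_service('/room_info')
room_info = rospy.ServiceProxy('/room_info', RoomInformation)

# Assumption: the request carries a marker id and the response describes
# the related room; 11 is a placeholder id.
response = room_info(11)
print(response.room, response.x, response.y)
```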
The `ontology_map_builder` node uses the following parameters:
- /ontology_reference (string) : The reference name of the ontology.
- /ontology_path (string) : The global path of the default ontology.
- /ontology_uri (string) : The uri of the ontology.
- /rooms (list) : The list of room names for building the map.
- /rooms_doors (list) : At index i, the list of doors belonging to room i.
- /rooms_positions (list) : At index i, the position of room i.
- /robot_room (string) : The initial room at which the robot is located.
The `motion_controller` node uses the following parameters:
- /goal_threshold (float) : The distance from the goal at which the robot is considered to have arrived.
The `marker_detector` node uses the following parameters:
- /markers_count (int) : The number of markers to detect.
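For completeness, a node can read the parameters listed above with rospy; the default values used here are placeholders, since the actual values are set by the launch files:

```python
import rospy

rospy.init_node('parameters_example')

# Defaults are placeholders: the launch files set the actual values.
reference = rospy.get_param('/ontology_reference', 'example_reference')
rooms = rospy.get_param('/rooms', [])
positions = rospy.get_param('/rooms_positions', [])
markers_count = rospy.get_param('/markers_count', 0)

for room, position in zip(rooms, positions):
    rospy.loginfo('Room %s is at position %s', room, str(position))
```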
In this first gif it is possible to observe how the robot detects the ArUco markers around itself: the arm is composed of a rotational joint connected to the main chassis, which controls the yaw of the arm, and another rotational joint, which controls the pitch of the camera.
The robot performs yaw rotations of the arm from -π to π at different camera pitches.
It is assumed that the number of placed ArUco markers is known a priori: the robot keeps performing these rotations, referred to as "robot inspection routines", until all the markers are located correctly.
In fact, it may happen that a marker is not detected correctly and a wrong id is obtained: the robot simply discards the value and continues to scan the environment. This behavior can be observed at the end of the gif, when markers 147 and 148 are discarded.
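A minimal sketch of such a sweep, assuming the joint command topics listed in the interfaces section; the pitch values, step size, and timings are illustrative, not the ones used by the actual node:

```python
import rospy
from std_msgs.msg import Float64
from math import pi

rospy.init_node('inspection_sweep_example')
yaw_pub = rospy.Publisher('/joint0_position_controller/command',
                          Float64, queue_size=10)
pitch_pub = rospy.Publisher('/camera_position_controller/command',
                            Float64, queue_size=10)
rospy.sleep(1.0)  # give the publishers time to register

# Illustrative values: sweep the yaw from -pi to pi at a few camera pitches.
for pitch in [0.0, 0.4, 0.8]:
    pitch_pub.publish(Float64(data=pitch))
    yaw = -pi
    while yaw <= pi:
        yaw_pub.publish(Float64(data=yaw))
        yaw += 0.2
        rospy.sleep(0.1)  # let the camera settle between steps
```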
In this second gif instead, it is possible to observe that, after all the markers have been correctly detected, the robot starts to move in the environment following the behaviour described in the previous assignment.
These are some of the possible improvements that could be carried out on this project:
- Currently, the robotic arm placed on the robot chassis moves almost instantly from one configuration to another, which causes some problems with the physics simulation. These problems might be solved by better tuning the parameters, by using a different PID controller, or by using another type of controller for the arm.
- It might happen that the `move_base` node does not move the robot in the best possible way, and the robot might get stuck on some walls. These problems might be solved by better tuning the `move_base` parameters or by using another node for moving the robot.