From 78fd3e72cdde5918f8f44ca01018651fb04b9083 Mon Sep 17 00:00:00 2001 From: changsh726 Date: Thu, 24 Sep 2020 16:55:23 +0800 Subject: [PATCH] Docs: move links in howto docs to relative paths --- docs/howto/how_to_add_a_new_lidar_driver.md | 46 ++++++------- .../howto/how_to_add_a_new_lidar_driver_cn.md | 44 ++++++------ .../how_to_add_an_external_dependency.md | 12 ++-- docs/howto/how_to_build_and_run_python_app.md | 4 +- docs/howto/how_to_build_your_own_kernel.md | 2 +- .../how_to_debug_dreamview_start_problem.md | 2 +- ...e_local_map_for_MSF_localization_module.md | 4 +- ...ocal_map_for_MSF_localization_module_cn.md | 2 +- docs/howto/how_to_launch_and_run_apollo.md | 2 +- docs/howto/how_to_leverage_scenario_editor.md | 32 ++++----- ...updated_apollo_master_from_old_versions.md | 12 ++-- ...alization_module_on_your_local_computer.md | 18 ++--- ...zation_module_on_your_local_computer_cn.md | 2 +- ...zation_module_on_your_local_computer_cn.md | 2 +- ...alization_module_on_your_local_computer.md | 6 +- ...zation_module_on_your_local_computer_cn.md | 2 +- .../howto/how_to_run_map_verification_tool.md | 2 +- ...quential_obstacle_perception_visualizer.md | 24 +++---- ...ntial_obstacle_perception_visualizer_cn.md | 12 ++-- ...erception_module_on_your_local_computer.md | 8 +-- docs/howto/how_to_setup_dual_ipc.md | 68 +++++++++---------- docs/howto/how_to_tune_control_parameters.md | 2 +- .../how_to_tune_control_parameters_cn.md | 4 +- .../how_to_use_apollo_2.5_navigation_mode.md | 58 ++++++++-------- ...ow_to_use_apollo_2.5_navigation_mode_cn.md | 36 +++++----- 25 files changed, 203 insertions(+), 203 deletions(-) diff --git a/docs/howto/how_to_add_a_new_lidar_driver.md b/docs/howto/how_to_add_a_new_lidar_driver.md index f2a7a800282..9a636d935d9 100644 --- a/docs/howto/how_to_add_a_new_lidar_driver.md +++ b/docs/howto/how_to_add_a_new_lidar_driver.md @@ -19,16 +19,16 @@ modules/drivers/velodyne/proto/velodyne.proto ``` modules/drivers/proto/pointcloud.proto ``` - + 3. 
[Compensator](https://github.com/ApolloAuto/apollo/tree/master/modules/drivers/velodyne/compensator): Compensator takes pointcloud data and pose data as inputs. Based on the corresponding pose information for each cloud point, it converts each cloud point to be aligned with the latest time in the current lidar scan frame, minimizing the motion error due to the movement of the vehicle. Thus, each cloud point needs to carry its own timestamp information.

## Steps to add a new Lidar driver

#### 1. Get familiar with Apollo Cyber RT framework.

-Please refer to the [manuals of Apollo Cyber RT](https://github.com/ApolloAuto/apollo/tree/master/docs/cyber).
+Please refer to the [manuals of Apollo Cyber RT](../cyber/README.md).

#### 2. Define message for raw data

Apollo already defines the format of pointcloud. For a new lidar, you only need to define the protobuf message for the raw scanning data. The raw data will be archived and used for offline development. Compared to processed pointcloud data, raw data saves a lot of storage space in the long term. The new message for the scan data can be defined as below:

@@ -45,14 +45,14 @@ message ScanData {
 }
 ```

-In the velodyne driver, the scan data message is defined as [VelodyneScan](https://github.com/ApolloAuto/apollo/blob/master/modules/drivers/velodyne/proto/velodyne.proto#L29).
+In the velodyne driver, the scan data message is defined as [VelodyneScan](../../modules/drivers/velodyne/proto/velodyne.proto#L29).

#### 3. Access the raw data

Every second, a Lidar generates a large amount of data, so it relies on UDP to transport the raw data efficiently. You need to create a DriverComponent class, which inherits from Component without any template parameter. In its Init function, you need to start an async polling thread, which will receive Lidar data from the specific port.
Then, depending on the Lidar's frequency, the DriverComponent needs to package all the packets within a fixed period into a frame of ScanData. Eventually, the writer will send the ScanData through a corresponding channel.

```c++
// Inherit component with no template parameters,
// do not receive message from any channel
class DriverComponent : public Component<> {
 public:
@@ -62,8 +62,8 @@ class DriverComponent : public Component<> {
      this->Poll();
    }));
  }

 private:
  void Poll() {
    while (apollo::cyber::Ok()) {
      // poll data from port xxx
@@ -74,11 +74,11 @@ class DriverComponent : public Component<> {
      writer_.write(scan);
    }
  }

  std::shared_ptr<std::thread> poll_thread_;
  std::shared_ptr<Writer<ScanData>> writer_;
};

CYBER_REGISTER_COMPONENT(DriverComponent)
```

@@ -87,7 +87,7 @@ CYBER_REGISTER_COMPONENT(DriverComponent)

If the new lidar driver already provides pointcloud data in the Cartesian coordinate system, then you just need to store that data in the protobuf format defined in Apollo. The Parser converts the lidar raw data to the pointcloud format in the Cartesian coordinate system. The Parser takes ScanData as input. For each cloud point, it parses the timestamp, the x/y/z coordinates and the intensity, then packages all the cloud point information into a frame of pointcloud. Each cloud point is transformed into the FLU (Front: x, Left: y, Up: z) coordinates with the Lidar as the origin point.

```c++
message PointXYZIT {
  optional float x = 1 [default = nan];
@@ -97,7 +97,7 @@ message PointXYZIT {
  optional uint64 timestamp = 5 [default = 0];
}
```

Then you need to create a new ParserComponent, which inherits the Component template with ScanData. The ParserComponent takes ScanData as input, then generates the pointcloud message and sends it out.

```c++
@@ -107,26 +107,26 @@ class ParserComponent : public Component {
 public:
  bool Init() override {
    ...
  }

  bool Proc(const std::shared_ptr<ScanData>& scan_msg) override {
    // get a pointcloud object from objects pool
    auto point_cloud_out = point_cloud_pool_->GetObject();
    // clear before using
    point_cloud_out->clear();
    // parse scan data and generate pointcloud
    parser_->parse(scan_msg, point_cloud_out);
    // write pointcloud to a specific channel
    writer_->write(point_cloud_out);
  }

 private:
  std::shared_ptr<Writer<PointCloud>> writer_;
  std::unique_ptr<Parser> parser_ = nullptr;

  std::shared_ptr<CCObjectPool<PointCloud>> point_cloud_pool_ = nullptr;
  int pool_size_ = 8;
};

CYBER_REGISTER_COMPONENT(ParserComponent)
```

@@ -137,7 +137,7 @@ Motion compensation is optional depends on lidar hardware design. E.g. if the th

#### 6. Configure the dag file

After you are done with each component, you just need to configure the DAG config file to add each component into the data processing pipeline. E.g. lidar_driver.dag:

```python
# Define all coms in DAG streaming.
module_config {
@@ -150,7 +150,7 @@ module_config {
 }
 }
}

module_config {
  module_library : "/apollo/bazel-bin/modules/drivers/xxx/parser/libxxx_parser_component.so"
  components {
@@ -162,7 +162,7 @@ module_config {
 }
 }
}

module_config {
  module_library : "/apollo/bazel-bin/modules/drivers/xxx/compensator/libxxx_compensator_component.so"
  components {
diff --git a/docs/howto/how_to_add_a_new_lidar_driver_cn.md b/docs/howto/how_to_add_a_new_lidar_driver_cn.md
index f36777a8fb2..713a1061394 100644
--- a/docs/howto/how_to_add_a_new_lidar_driver_cn.md
+++ b/docs/howto/how_to_add_a_new_lidar_driver_cn.md
@@ -18,7 +18,7 @@ Lidar是一种常用的环境感知传感器,利用脉冲激光来照射目标
 cyber框架下系统中每一个功能单元都可以抽象为一个component,通过channel相互间进行通信,然后根据dag(有向无环图)配置文件,构建成相应的pipeline,实现数据的流式处理。

#### 2.
消息定义 - + apollo已经预定义了点云的消息格式,所以只需要为新lidar定义一个存储原始扫描数据的proto消息,用于数据的存档和离线开发调试,相比于点云数据,存档原始数据可以大量节省存储空间。一个新的扫描数据消息可以类似如下定义: ```c++ @@ -32,14 +32,14 @@ apollo已经预定义了点云的消息格式,所以只需要为新lidar定义 repeated bytes raw_data = 5; // raw scan data } ``` -在velodyne驱动中,其扫描数据消息定义为[VelodyneScan](https://github.com/ApolloAuto/apollo/blob/master/modules/drivers/velodyne/proto/velodyne.proto#L29). - +在velodyne驱动中,其扫描数据消息定义为[VelodyneScan](../../modules/drivers/velodyne/proto/velodyne.proto#L29). + #### 3. 读取原始数据 lidar每秒会产生大量数据,一般通过UDP协议来进行数据的高效传输。编写一个DriverComponent类,继承于无模版参数Component类;在Init函数中启动一个异步poll线程,不断从相应的端口读取lidar数据;然后根据需求如将一段时间内的数据打包为一帧ScanData,如扫描一圈为一帧;最后通过writer将ScanData写至相应的channel发送出去。 ```c++ -// Inherit component with no template parameters, +// Inherit component with no template parameters, // do not receive message from any channel class DriverComponent : public Component<> { public: @@ -49,8 +49,8 @@ class DriverComponent : public Component<> { this->Poll(); })); } - - private: + + private: void Poll() { while (apollo::cyber::Ok()) { // poll data from port xxx @@ -61,18 +61,18 @@ class DriverComponent : public Component<> { writer_.write(scan); } } - + std::shared_ptr poll_thread_; std::shared_ptr> writer_; }; - + CYBER_REGISTER_COMPONENT(DriverComponent) ``` #### 4. 解析扫描数据,生成点云。 编写一个Parser类,输入为一帧ScanData,根据lidar自己的数据协议,解析出每一个点的时间戳,x/y/z三维坐标,以及反射强度,并组合成一帧点云。每个点都位于以lidar为原点的FLU(Front: x, Left: y, Up: z)坐标系下。 - + ```c++ message PointXYZIT { optional float x = 1 [default = nan]; @@ -82,7 +82,7 @@ message PointXYZIT { optional uint64 timestamp = 5 [default = 0]; } ``` - + 然后定义一个ParserComponent,继承于ScanData实例的Component模板类。接收ScanData消息,生成点云消息,发送点云消息。 ```c++ @@ -92,36 +92,36 @@ class ParserComponent : public Component { bool Init() override { ... 
} - + bool Proc(const std::shared_ptr& scan_msg) override { // get a pointcloud object from objects pool - auto point_cloud_out = point_cloud_pool_->GetObject(); + auto point_cloud_out = point_cloud_pool_->GetObject(); // clear befor using - point_cloud_out->clear(); + point_cloud_out->clear(); // parse scan data and generate pointcloud parser_->parse(scan_msg, point_cloud_out); // write pointcloud to a specific channel writer_->write(point_cloud); } - + private: std::shared_ptr> writer_; std::unique_ptr parser_ = nullptr; - - std::shared_ptr> point_cloud_pool_ = nullptr; + + std::shared_ptr> point_cloud_pool_ = nullptr; int pool_size_ = 8; }; - + CYBER_REGISTER_COMPONENT(ParserComponent) ``` #### 5. 对点云进行运行补偿 运动补偿是一个通用的点云处理过程,可以直接复用velodyne driver中compensator模块的算法逻辑。 - + #### 6. 配置dag文件 - + 将各个数据处理环节定义为component后,需要将各个component组成一个lidar数据处理pipeline,如下配置lidar_driver.dag: - + ```python # Define all coms in DAG streaming. module_config { @@ -134,7 +134,7 @@ module_config { } } } - + module_config { module_library : "/apollo/bazel-bin/modules/drivers/xxx/parser/libxxx_parser_component.so" components { @@ -146,7 +146,7 @@ module_config { } } } - + module_config { module_library : "/apollo/bazel-bin/modules/drivers/xxx/compensator/libxxx_compensator_component.so" components { diff --git a/docs/howto/how_to_add_an_external_dependency.md b/docs/howto/how_to_add_an_external_dependency.md index d84f0497a17..e592c5c60a9 100644 --- a/docs/howto/how_to_add_an_external_dependency.md +++ b/docs/howto/how_to_add_an_external_dependency.md @@ -1,7 +1,7 @@ # How to Add a New External Dependency The bazel files about third-party dependencies are all in the folder -[third_party](https://github.com/ApolloAuto/apollo/blob/master/third_party) +[third_party](../../third_party) which has a structure as following. ```shell @@ -68,7 +68,7 @@ def repo(): It's pretty common to do so. But it needs very solid knowledge with bazel. 
-[workspace.bzl](https://github.com/ApolloAuto/apollo/blob/master/third_party/yaml_cpp/workspace.bzl): +[workspace.bzl](../../third_party/yaml_cpp/workspace.bzl): ```python load("@bazel_tools//tools/build_defs/repo:http.bzl", "http_archive") @@ -88,7 +88,7 @@ def repo(): ) ``` -[yaml.BUILD](https://github.com/ApolloAuto/apollo/blob/master/third_party/yaml_cpp/yaml.BUILD): +[yaml.BUILD](../../third_party/yaml_cpp/yaml.BUILD): ```python load("@rules_cc//cc:defs.bzl", "cc_library") @@ -123,7 +123,7 @@ For example, - [Poco](https://github.com/pocoproject/poco) -[workspace.bzl](https://github.com/ApolloAuto/apollo/blob/master/third_party/poco/workspace.bzl): +[workspace.bzl](../../third_party/poco/workspace.bzl): ```python def clean_dep(dep): @@ -137,7 +137,7 @@ def repo(): ) ``` -[poco.BUILD](https://github.com/ApolloAuto/apollo/blob/master/third_party/poco/poco.BUILD): +[poco.BUILD](../../third_party/poco/poco.BUILD): ```python load("@rules_cc//cc:defs.bzl", "cc_library") @@ -164,7 +164,7 @@ as they are in the system path. For all of the above types of external dependencies, we also need to add them into -[tools/workspace.bzl](https://github.com/ApolloAuto/apollo/blob/master/tools/workspace.bzl) +[tools/workspace.bzl](../../tools/workspace.bzl) ## References diff --git a/docs/howto/how_to_build_and_run_python_app.md b/docs/howto/how_to_build_and_run_python_app.md index 439c29d05b6..911e92cfbc0 100644 --- a/docs/howto/how_to_build_and_run_python_app.md +++ b/docs/howto/how_to_build_and_run_python_app.md @@ -54,8 +54,8 @@ py_test( ``` Above is a BUILD file template, you can also use the -[BUILD](https://github.com/ApolloAuto/apollo/blob/master/cyber/python/BUILD) and -[BUILD](https://github.com/ApolloAuto/apollo/blob/master/cyber/python/examples/BUILD) +[BUILD](../../cyber/python/BUILD) and +[BUILD](../../cyber/python/cyber_py3/examples/BUILD) file as examples. 
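Since a `py_test` target like the BUILD template above simply runs a Python test module and reports pass/fail from its exit status, it may help to see what such a module looks like. The file below is a purely illustrative sketch — the function and file names are assumptions, not part of the Apollo codebase:

```python
"""Illustrative test module that a py_test target could execute.

All names here are hypothetical examples, not actual Apollo code.
"""


def make_channel_name(*parts):
    # Compose a Cyber RT style channel name such as "/apollo/sensor/lidar".
    return "/" + "/".join(parts)


def test_make_channel_name():
    assert make_channel_name("apollo", "sensor", "lidar") == "/apollo/sensor/lidar"


if __name__ == "__main__":
    test_make_channel_name()
    print("ok")
```

A `py_test` target pointing at a file like this passes when the script exits with status 0 and fails otherwise.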
## Build, Test and Run commands diff --git a/docs/howto/how_to_build_your_own_kernel.md b/docs/howto/how_to_build_your_own_kernel.md index b27ab6efdc1..57552833994 100644 --- a/docs/howto/how_to_build_your_own_kernel.md +++ b/docs/howto/how_to_build_your_own_kernel.md @@ -13,4 +13,4 @@ cd apollo-kernel 2. Add the ESD CAN driver source code according to [ESDCAN-README.md](https://github.com/ApolloAuto/apollo-kernel/blob/master/linux/ESDCAN-README.md). 3. Build the kernel with the following command: ```bash build.sh``` -4. Install the kernel the same way as using a pre-built [Apollo Kernel](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_install_apollo_kernel.md). +4. Install the kernel the same way as using a pre-built [Apollo Kernel](how_to_install_apollo_kernel.md). diff --git a/docs/howto/how_to_debug_dreamview_start_problem.md b/docs/howto/how_to_debug_dreamview_start_problem.md index 81b45f080bf..600bf83ba59 100644 --- a/docs/howto/how_to_debug_dreamview_start_problem.md +++ b/docs/howto/how_to_debug_dreamview_start_problem.md @@ -123,7 +123,7 @@ in_dev_docker:/apollo$ ./apollo.sh build_no_perception dbg ``` 2. 
Compile pcl and copy the pcl library files to `/usr/local/lib`:

-See [/apollo/WORKSPACE.in](https://github.com/ApolloAuto/apollo/blob/master/WORKSPACE.in) to identify your pcl library version:
+See [/apollo/WORKSPACE.in](../../WORKSPACE.in) to identify your pcl library version:

- Prior to Apollo 5.0 (inclusive): pcl-1.7
- After Apollo 5.0: pcl-1.9
diff --git a/docs/howto/how_to_generate_local_map_for_MSF_localization_module.md b/docs/howto/how_to_generate_local_map_for_MSF_localization_module.md
index 94edf2037e8..8ffd1719406 100644
--- a/docs/howto/how_to_generate_local_map_for_MSF_localization_module.md
+++ b/docs/howto/how_to_generate_local_map_for_MSF_localization_module.md
@@ -2,7 +2,7 @@

## Prerequisites
 - Download source code of Apollo from [GitHub](https://github.com/ApolloAuto/apollo)
- - Follow the tutorial to set up [docker environment](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_build_and_release.md).
+ - Follow the tutorial to set up [docker environment](../quickstart/apollo_software_installation_guide.md).
 - ~~Download localization data from the [Multi-Sensor Fusion Localization Data](http://data.apollo.auto/help?name=sensor%20data&data_key=multisensor&data_type=1&locale=en-us&lang=en)(US only).~~
 - Download localization dataset: please contact Yao Zhou, zhouyao@baidu.com, to request the dataset. Requests need to contain the following: (1) Email address and affiliation (business or school); (2) Application purpose.

@@ -34,4 +34,4 @@ After the script is finished, you can find the produced localization map named *

The script also stores the visualization of each generated map node in the map's subfolder named `image`.
The visualization of a map node filled with LiDAR data looks like this: -![1](images/msf_localization/map_node_image.png) \ No newline at end of file +![1](images/msf_localization/map_node_image.png) diff --git a/docs/howto/how_to_generate_local_map_for_MSF_localization_module_cn.md b/docs/howto/how_to_generate_local_map_for_MSF_localization_module_cn.md index 5251f4d426a..57aafeb1c2e 100644 --- a/docs/howto/how_to_generate_local_map_for_MSF_localization_module_cn.md +++ b/docs/howto/how_to_generate_local_map_for_MSF_localization_module_cn.md @@ -4,7 +4,7 @@ ## 1. 事先准备 - 从[GitHub网站](https://github.com/ApolloAuto/apollo)下载Apollo源代码 - - 按照[教程](https://github.com/ApolloAuto/apollo/blob/master/README.md)设置Docker环境 + - 按照[教程](../quickstart/apollo_software_installation_guide.md)设置Docker环境 - ~~从[Apollo数据平台](http://data.apollo.auto/?name=sensor%20data&data_key=multisensor&data_type=1&locale=en-us&lang=en)的“多传感器融合定位数据”栏目下载多传感器融合定位demo数据包(仅限美国地区)。~~ - 下载数据集: 请发邮件至*zhouyao@baidu.com*来申请数据。邮件中需要包含以下内容:(1) 你所在的机构名称和邮件地址; (2)数据集使用目的。 diff --git a/docs/howto/how_to_launch_and_run_apollo.md b/docs/howto/how_to_launch_and_run_apollo.md index 02a8786d782..a6a4ecd19a4 100644 --- a/docs/howto/how_to_launch_and_run_apollo.md +++ b/docs/howto/how_to_launch_and_run_apollo.md @@ -73,5 +73,5 @@ different due to frontend code changes.) ### Congrats! You have successfully built Apollo! Now you can revisit -[Apollo Readme](https://github.com/ApolloAuto/apollo/blob/master/README.md) for +[Apollo Readme](../../README.md) for additional guidelines on the neccessary hardware setup. diff --git a/docs/howto/how_to_leverage_scenario_editor.md b/docs/howto/how_to_leverage_scenario_editor.md index 64ac9fa122f..51c66fa0a19 100644 --- a/docs/howto/how_to_leverage_scenario_editor.md +++ b/docs/howto/how_to_leverage_scenario_editor.md @@ -4,13 +4,13 @@ Simulation plays a central role in Apollo’s internal development cycle. 
Dreamland empowers developers and start-ups to run millions of miles of simulation daily, which dramatically accelerates the development cycle.

-So far, Apollo simulation has allowed external users to access over 200 sample scenarios, which include a diverse range of LogSim scenarios based on real-world driving data and WorldSim scenarios that have been manually created by our simulation team. To learn more about Dreamland, refer to [our Dreamland Introduction Guide](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/Dreamland_introduction.md)
+So far, Apollo simulation has allowed external users to access over 200 sample scenarios, which include a diverse range of LogSim scenarios based on real-world driving data and WorldSim scenarios that have been manually created by our simulation team. To learn more about Dreamland, refer to [our Dreamland Introduction Guide](../specs/Dreamland_introduction.md)

Several developers wrote in requesting that our Dreamland platform support Scenario Creation and Editing, which the Apollo team now proudly presents in Apollo 5.0!

## Setting up Scenario Editor

-1. Log in to your Dreamland account. For additional details on how to create an account, please refer to [our Dreamland Introduction Guide](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/Dreamland_introduction.md)
+1. Log in to your Dreamland account. For additional details on how to create an account, please refer to [our Dreamland Introduction Guide](../specs/Dreamland_introduction.md)

2. Once inside the platform, the Scenario Editor can be accessed under `Scenario Management` or using the [following link](https://azure.apollo.auto/scenario-management/scenario-editor)

3. Once inside, you will have to complete the form on the screen as seen in the image below. As this app is in Beta testing, it is not open to all our developers.
- ![](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/form.png) + ![](../../docs/specs/images/form.png) 4. You should receive the following activation confirmation via email within 3 business days: - ![](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/images/email.png) + ![](../../docs/specs/images/email.png) ## Using Scenario Editor @@ -42,7 +42,7 @@ Let's understand each tool along with its purpose ### General Action Tools -The 4 General action tools can be found on the bottom right corner of the map. +The 4 General action tools can be found on the bottom right corner of the map. 1. **Zoom tool**: while you can use your trackpad to zoom in and out of the map, there exists the Zoom tool to help you zoom in and out of the map in case you do not have a trackpad ready. @@ -56,10 +56,10 @@ The 4 General action tools can be found on the bottom right corner of the map. ![](images/se_ruler.png) ![](images/se_distance.png) -4. **Add Route tool**: this tool can be used both for the ego-car as well as the obstacles you set in its path. For the ego-car you can only set its destination, but for obstacles, you can set multiple points that define their driving behavior. +4. **Add Route tool**: this tool can be used both for the ego-car as well as the obstacles you set in its path. For the ego-car you can only set its destination, but for obstacles, you can set multiple points that define their driving behavior. ![](images/se_addroute.png) -  + ### Configuration Tools There are 4 types of configurations that you will need to set up in order to create a scenario, three of which are listed on the left-side of the map - General, Ego-car and Participants (Obstacles) and the last one is Traffic Light @@ -83,8 +83,8 @@ The Ego car's heading can also be set by dragging the arrow linked to the ego ca ![](images/heading.png) ``` -Note: -You can set the ego car’s end point by clicking on the “Add Route Point” icon in the lower right corner of the map. 
Described in the General Action tools section.
```

Once you have placed the Ego car's end point on the map, the end point coordinates will then appear on the right-hand attributes window. You can drag the end point flag to change the ego car’s end point location. The “End point” coordinates will be automatically updated accordingly.

@@ -95,23 +95,23 @@ Finally, you can always come back and edit the existing attributes of the ego ca

#### Participants' Configuration

If you select `Participant` from the configuration menu, you can place your participant in your scenario by clicking on a desired location on the map. You will notice that your mouse pointer will turn into a cross until you place the new participant on your map. Once you place it, a form will appear on the right-hand attributes window as it did with `Ego Car`. Before you edit the fields on the form, you can change the position of the participant by clicking and dragging it. You can also modify its heading by clicking on the arrow head.
Once you have finalized the heading and position of your participant, you can start working on the specific details mentioned in the form - type, length, speed and motion type.

![](images/obstacle.png)

In the Basic Information section, you will notice an auto-generated ID along with a description textbox. You could give your participant a suitable ID as well as a description of its expected behavior. You could also specify your participant's type, which will be set to `Car` by default. Upon selecting a different type, the participant on your screen will change accordingly. You will also need to determine its initial speed and other attributes including width, length and height. There are predetermined values for each vehicle type, which can be changed.

In the Initial State section, you will need to set the speed of the participant, which can be set in either `m/s` or `km/h`. The coordinates and heading of the participant are preset and can be changed by directly editing the participant's position on the map.

In Runtime Configuration, you can set whether the participant is mobile or static. Should you select static, you have finished setting up your participant and are ready to save.
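The participant attributes described above can be summarized in a small data-model sketch. This is purely illustrative — the class and field names are assumptions, not Dreamland's actual scenario schema:

```python
"""Hypothetical data model for a scenario participant.

This only summarizes the attributes the editor form exposes; it is not
the format Dreamland actually stores.
"""
from dataclasses import dataclass, field
from typing import List, Tuple


@dataclass
class Participant:
    participant_id: str                  # auto-generated, can be renamed
    description: str = ""
    participant_type: str = "Car"        # changing it updates the on-map model
    length_m: float = 4.5                # predetermined per type, editable
    width_m: float = 1.8
    height_m: float = 1.5
    speed_mps: float = 0.0               # initial speed, stored in m/s
    heading_deg: float = 0.0             # set by dragging the arrow on the map
    position: Tuple[float, float] = (0.0, 0.0)
    is_static: bool = True               # Runtime Configuration: static vs. mobile
    route_points: List[Tuple[float, float]] = field(default_factory=list)

    def speed_kmh(self) -> float:
        # The form accepts speed in m/s or km/h; convert for display.
        return self.speed_mps * 3.6
```

For example, a participant created with an initial speed of 10 m/s would report a display speed of 36 km/h.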
If you select mobile instead, you would need to set its `Trigger Type`. Once you have completed your mobile participant setup, click on the `add route point` button to set the participant's trajectory points as seen in the image below.

![](images/se_addroute.png)

You can set a single destination, or add several points in between. You will also be able to add and change the speed of your participant on the form from one point to the next. Also, you can edit the location of a point on the screen by clicking on it and dragging it to its desired location.

Finally, if you have added several trajectory points and do not know how to go back to your participant, you can use the `Re-center tool` (which is similar to the General Action re-center tool), but this re-center tool only works for your participants.

![](images/center2.png)

@@ -149,6 +149,6 @@ The minimum requirements of saving a scenario are to configure all required attr

![](images/select_scenario.png)

3. You can then search for your newly created scenario. An easy way to filter your private scenarios is to perform an instance search for your username in the `Search scenarios` field.
-![](images/instance.png) \ No newline at end of file +![](images/instance.png) diff --git a/docs/howto/how_to_migrate_to_the_updated_apollo_master_from_old_versions.md b/docs/howto/how_to_migrate_to_the_updated_apollo_master_from_old_versions.md index 9a4447c63d8..1ec1669e121 100644 --- a/docs/howto/how_to_migrate_to_the_updated_apollo_master_from_old_versions.md +++ b/docs/howto/how_to_migrate_to_the_updated_apollo_master_from_old_versions.md @@ -3,11 +3,11 @@ ## Introduction Due to a fatal bug with Git LFS that has caused restricted access to Apollo repos, we have decided to retire the service from all Apollo repos. On May 14th 2:07 PM Pacific Time, the Apollo Team has completed the migration. We are sorry for any inconveniences this may cause you. -``` +``` Note: If this is the first time you are cloning/building Apollo, you do not need to follow this guide. This guide is for people who had installed Git LFS previously with Apollo. ``` -If this is your first time installing Apollo, please return to the [README](https://github.com/ApolloAuto/apollo/blob/master/README.md) page. +If this is your first time installing Apollo, please return to the [README](../../README.md) page. ## Why did we retire Git LFS @@ -31,7 +31,7 @@ git pull --rebase upstream master #where “upstream” is your defined alias of ``` 3. Hard reset your forked repo: -``` +``` git push -f original master ``` @@ -47,10 +47,10 @@ please cherry-pick those changes to the new repo and submit your commits. ``` Note: -If your repo did not sync with ours, you will still be using the Git LFS service. However, once the service is disabled, there is a high likelihood that your access to Apollo repos will be blocked/denied. To avoid such an incident, please follow the steps listed above that best +If your repo did not sync with ours, you will still be using the Git LFS service. However, once the service is disabled, there is a high likelihood that your access to Apollo repos will be blocked/denied. 
To avoid such an incident, please follow the steps listed above that best fit your situation.
```

## Troubleshooting steps

If you are still experiencing issues, you can always re-fork the repo. Let us know if you need any assistance with this process by creating an issue.
diff --git a/docs/howto/how_to_run_MSF_localization_module_on_your_local_computer.md b/docs/howto/how_to_run_MSF_localization_module_on_your_local_computer.md
index f00ff13d91e..3f2c823e9b0 100644
--- a/docs/howto/how_to_run_MSF_localization_module_on_your_local_computer.md
+++ b/docs/howto/how_to_run_MSF_localization_module_on_your_local_computer.md
@@ -2,7 +2,7 @@

## 1. Preparation
 - Download source code of Apollo from [GitHub](https://github.com/ApolloAuto/apollo)
- - Follow the tutorial to set up [docker environment](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_build_and_release.md).
+ - Follow the tutorial to set up [docker environment](../quickstart/apollo_software_installation_guide.md).
 - Download localization data from [Apollo Data Open Platform](http://data.apollo.auto/?name=sensor%20data&data_key=multisensor&data_type=1&locale=en-us&lang=en)(US only).

The localization data is an experimental dataset to verify the availability of localization. It contains a localization map (local_map/), vehicle params (params/), and sensor recording data (records/). The specific attributes are as follows:
duration: 5 mins
mileage: 3 km
areas: city roads in Sunnyvale
weather: sunny day

## 2.
Build Apollo

First, check and make sure you are in the development docker container before you proceed. Now you will need to build from the source.
```
# To make sure you start clean
bash apollo.sh clean
```
@@ -52,7 +52,7 @@ run the script in apollo directory
 cyber_launch start /apollo/modules/localization/launch/msf_localization.launch
```

In the /apollo/data/log directory, you can see the localization log files.
- localization.INFO : INFO log
- localization.WARNING : WARNING log
- localization.ERROR : ERROR log
@@ -89,7 +89,7 @@ If everything is fine, you should see this on screen.

`Note:` The visualization tool will show its windows after the localization module starts publishing localization messages to the topic /apollo/localization/pose. You can use the command *cyber_monitor* to monitor the status of topics.

## 7. Stop localization module
If you recorded the localization results in step 6, you will also need to end the recording process:
```
python /apollo/scripts/record_bag.py --stop
```
diff --git a/docs/howto/how_to_run_MSF_localization_module_on_your_local_computer_cn.md b/docs/howto/how_to_run_MSF_localization_module_on_your_local_computer_cn.md
index 68cad84a049..d0a674f948d 100644
--- a/docs/howto/how_to_run_MSF_localization_module_on_your_local_computer_cn.md
+++ b/docs/howto/how_to_run_MSF_localization_module_on_your_local_computer_cn.md
@@ -4,7 +4,7 @@

## 1.
事先准备 - 从[GitHub网站](https://github.com/ApolloAuto/apollo)下载Apollo源代码 - - 按照[教程](https://github.com/ApolloAuto/apollo/blob/master/README.md)设置Docker环境 + - 按照[教程](../quickstart/apollo_software_installation_guide.md)设置Docker环境 - 从[Apollo数据平台](http://data.apollo.auto/?name=sensor%20data&data_key=multisensor&data_type=1&locale=en-us&lang=en)下载多传感器融合定位demo数据包(仅限美国地区),使用其中*apollo3.5*文件夹下的数据。 此定位数据为实验性质的demo数据,用于验证定位模块的可用性。数据主要包含定位地图(local_map/), 车辆参数(params/), 传感器数据(records/)。具体属性如下: diff --git a/docs/howto/how_to_run_NDT_localization_module_on_your_local_computer_cn.md b/docs/howto/how_to_run_NDT_localization_module_on_your_local_computer_cn.md index d6be5c04783..14bfaec4b67 100644 --- a/docs/howto/how_to_run_NDT_localization_module_on_your_local_computer_cn.md +++ b/docs/howto/how_to_run_NDT_localization_module_on_your_local_computer_cn.md @@ -4,7 +4,7 @@ ## 1. 事先准备 - 从[GitHub网站](https://github.com/ApolloAuto/apollo)下载Apollo master分支源代码 - - 按照[教程](https://github.com/ApolloAuto/apollo/blob/master/README.md)设置Docker环境并搭建Apollo工程 + - 按照[教程](../quickstart/apollo_software_installation_guide.md)设置Docker环境并搭建Apollo工程 - 从[Apollo数据平台](http://data.apollo.auto/?name=sensor%20data&data_key=multisensor&data_type=1&locale=en-us&lang=en)下载定位数据(仅限美国地区) 此定位数据为实验性质的demo数据,用于验证定位模块的可用性。数据主要包含定位地图(ndt_map/), 车辆参数(params/), 传感器数据(records/)。具体属性如下: diff --git a/docs/howto/how_to_run_RTK_localization_module_on_your_local_computer.md b/docs/howto/how_to_run_RTK_localization_module_on_your_local_computer.md index 4a0fc5e79d5..80e789ef8fe 100644 --- a/docs/howto/how_to_run_RTK_localization_module_on_your_local_computer.md +++ b/docs/howto/how_to_run_RTK_localization_module_on_your_local_computer.md @@ -2,12 +2,12 @@ ## 1. Preparation - Download source code of Apollo from [GitHub](https://github.com/ApolloAuto/apollo) - - Follow the tutorial to set up [docker environment](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_build_and_release.md). 
+ - Follow the tutorial to set up [docker environment](../quickstart/apollo_software_installation_guide.md). - Download localization data from [Apollo Data Open Platform](http://data.apollo.auto/?name=sensor%20data&data_key=multisensor&data_type=1&locale=en-us&lang=en)(US only). ## 2. Build Apollo -First check and make sure you are in development docker container before you proceed. Now you will need to build from the source. +First check and make sure you are in the development docker container before you proceed. Now you will need to build from source. ``` # To make sure you start clean bash apollo.sh clean @@ -27,7 +27,7 @@ bash apollo.sh build_opt --local_resources 2048,1.0,1.0 cyber_launch start /apollo/modules/localization/launch/rtk_localization.launch ``` -In /apollo/data/log directory, you can see the localization log files. +In the /apollo/data/log directory, you can see the localization log files. - localization.INFO : INFO log - localization.WARNING : WARNING log - localization.ERROR : ERROR log diff --git a/docs/howto/how_to_run_RTK_localization_module_on_your_local_computer_cn.md b/docs/howto/how_to_run_RTK_localization_module_on_your_local_computer_cn.md index 09698cc924f..23fb5db2d58 100644 --- a/docs/howto/how_to_run_RTK_localization_module_on_your_local_computer_cn.md +++ b/docs/howto/how_to_run_RTK_localization_module_on_your_local_computer_cn.md @@ -4,7 +4,7 @@ ## 1. 事先准备 - 从[GitHub网站](https://github.com/ApolloAuto/apollo)下载Apollo源代码 - - 按照[教程](https://github.com/ApolloAuto/apollo/blob/master/README.md)设置Docker环境 + - 按照[教程](../quickstart/apollo_software_installation_guide.md)设置Docker环境 - 从[Apollo数据平台](http://data.apollo.auto/?name=sensor%20data&data_key=multisensor&data_type=1&locale=en-us&lang=en)下载多传感器融合定位demo数据包(仅限美国地区),使用其中*apollo3.5*文件夹下的数据。 ## 2.
编译apollo工程 diff --git a/docs/howto/how_to_run_map_verification_tool.md b/docs/howto/how_to_run_map_verification_tool.md index a712d611aa6..53ec83fdbf0 100644 --- a/docs/howto/how_to_run_map_verification_tool.md +++ b/docs/howto/how_to_run_map_verification_tool.md @@ -6,7 +6,7 @@ The Map Data Verification tool is designed to help Apollo developers detect any In order to run your data on this tool, please follow the steps below: -1. Build Apollo as recommended in the [Build Guide](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_build_and_release.md) until the `./apollo.sh build` step. +1. Build Apollo as recommended in the [Build Guide](../quickstart/apollo_software_installation_guide.md) until the `./apollo.sh build` step. 2. Once inside the dev docker and after running `./apollo.sh build`, please go to the folder `modules/tools/map_datachecker/` 3. Starting the server: ```bash diff --git a/docs/howto/how_to_run_offline_sequential_obstacle_perception_visualizer.md b/docs/howto/how_to_run_offline_sequential_obstacle_perception_visualizer.md index 2852a09b135..0565301d82d 100644 --- a/docs/howto/how_to_run_offline_sequential_obstacle_perception_visualizer.md +++ b/docs/howto/how_to_run_offline_sequential_obstacle_perception_visualizer.md @@ -1,10 +1,10 @@ # How to Run the Fusion Obstacle Visualization Tool -Apollo created the LiDAR Obstacle Visualization Tool, an offline visualization tool to show LiDAR-based obstacle perception results (see [How to Run Offline Perception Visualizer](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_run_offline_perception_visualizer.md)). However, the tool lacks the ability to visualize the radar-based obstacle perception results and the fusion results based on its two sensors. +Apollo created the LiDAR Obstacle Visualization Tool, an offline visualization tool to show LiDAR-based obstacle perception results (see [How to Run Offline Perception Visualizer](how_to_run_offline_perception_visualizer.md)).
However, the tool lacks the ability to visualize the radar-based obstacle perception results and the fusion results based on its two sensors. Apollo has developed a second visualization tool, the Fusion Obstacle Visualization Tool, to complement the LiDAR Obstacle Visualization Tool. The Fusion Obstacle Visualization Tool shows obstacle perception results from these modules: -- LiDAR-based algorithm module +- LiDAR-based algorithm module - Radar-based algorithm module - Fusion algorithm module for debugging and testing the complete obstacle perception algorithms @@ -52,9 +52,9 @@ bazel build //modules/perception/tool/export_sensor_data:export_sensor_data /apollo/bazel-bin/modules/perception/tool/export_sensor_data/export_sensor_data ``` -3. Play the ROS bag. +3. Play the ROS bag. -​ The default directory of the ROS bag is `/apollo/data/bag`. +​ The default directory of the ROS bag is `/apollo/data/bag`. ​ In the following example, the file name of ROS bag is `example.bag`. ​ Use these commands: @@ -64,13 +64,13 @@ cd /apollo/data/bag rosbag play --clock example.bag --rate=0.1 ``` -To ensure that you do not miss any frame data when performing callbacks to the ROS messages, it is recommended that you reduce the playing rate, which is set to `0.1` in the example above. +To ensure that you do not miss any frame data when performing callbacks to the ROS messages, it is recommended that you reduce the playing rate, which is set to `0.1` in the example above. -When you play the bag, all data files are dumped to the export directory, using the timestamp as the file name, frame by frame. +When you play the bag, all data files are dumped to the export directory, using the timestamp as the file name, frame by frame. The default LiDAR data export directory is `/apollo/data/lidar`. -The radar directory is `/apollo/data/radar`. +The radar directory is `/apollo/data/radar`. 
The directories can be defined in `/apollo/modules/perception/tool/export_sensor_data/conf/export_sensor_data.flag` using the flags `lidar_path` and `radar_path`. @@ -99,9 +99,9 @@ bazel build -c opt --cxxopt=-DUSE_GPU //modules/perception/tool/offline_visualiz ## Run the Tool -Before running the Fusion Obstacle Visualization Tool, you can set up the source data directories and the algorithm module settings in the configuration file: `/apollo/modules/perception/tool/offline_visualizer_tool/conf/offline_sequential_obstacle_perception_test.flag`. +Before running the Fusion Obstacle Visualization Tool, you can set up the source data directories and the algorithm module settings in the configuration file: `/apollo/modules/perception/tool/offline_visualizer_tool/conf/offline_sequential_obstacle_perception_test.flag`. -The default source data directories are `/apollo/data/lidar`and `/apollo/data/radar` for `lidar_path` and `radar_path`, respectively. +The default source data directories are `/apollo/data/lidar` and `/apollo/data/radar` for `lidar_path` and `radar_path`, respectively. The visualization-enabling Boolean flag is `true`, and the obstacle result type to be shown is `fused` (the fusion obstacle results based on both LiDAR and RADAR sensors) by default. You can change `fused` to `lidar` or `radar` to visualize the pure obstacle results generated by the single-sensor-based obstacle perception.
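The per-frame dump layout described above can be sketched in a line or two; the `.pcd` extension and the timestamp value below are assumptions for illustration, while the directory is the documented default:

```shell
# Sketch: where one exported frame would land, named by its timestamp.
lidar_path="/apollo/data/lidar"   # documented default for the lidar_path flag
timestamp="1522544280.123456"     # hypothetical sensor timestamp
frame_file="${lidar_path}/${timestamp}.pcd"  # file extension is an assumption
echo "${frame_file}"
```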
@@ -113,10 +113,10 @@ Run the Fusion Obstacle Visualization Tool using this command: You see results such as: -- A pop-up window showing the perception result with the point cloud, frame-by-frame -- The raw point cloud shown in grey +- A pop-up window showing the perception result with the point cloud, frame-by-frame +- The raw point cloud shown in grey - Bounding boxes (with red arrows that indicate the headings) that have detected: - Cars (green) - Pedestrians (pink) - Cyclists (blue) - - Unknown elements (purple) + - Unknown elements (purple) diff --git a/docs/howto/how_to_run_offline_sequential_obstacle_perception_visualizer_cn.md b/docs/howto/how_to_run_offline_sequential_obstacle_perception_visualizer_cn.md index ee2dca0fb94..38259915972 100644 --- a/docs/howto/how_to_run_offline_sequential_obstacle_perception_visualizer_cn.md +++ b/docs/howto/how_to_run_offline_sequential_obstacle_perception_visualizer_cn.md @@ -1,10 +1,10 @@ # 如何运行融合障碍可视化工具 -Apollo创建了LiDAR障碍物可视化工具,这是一种离线可视化工具,用于显示基于LiDAR的障碍物感知结果(请参看 [如何离线运行Perception Visulizer](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_run_offline_perception_visualizer_cn.md))。但是,该工具缺乏基于雷达的障碍物感知结果和基于其两个传感器的融合结果的可视化能力。 +Apollo创建了LiDAR障碍物可视化工具,这是一种离线可视化工具,用于显示基于LiDAR的障碍物感知结果(请参看 [如何离线运行Perception Visulizer](how_to_run_offline_perception_visualizer_cn.md))。但是,该工具缺乏基于雷达的障碍物感知结果和基于其两个传感器的融合结果的可视化能力。 Apollo开发了第二个可视化工具,即融合障碍可视化工具,以补充LiDAR障碍物可视化工具。融合障碍可视化工具显示了这些模块的障碍感知结果: -- 基于LiDAR的算法模块 +- 基于LiDAR的算法模块 - 基于雷达的算法模块 - 融合算法模块,用于调试和测试完整的障碍物感知算法 @@ -52,9 +52,9 @@ bazel build //modules/perception/tool/export_sensor_data:export_sensor_data /apollo/bazel-bin/modules/perception/tool/export_sensor_data/export_sensor_data ``` -3. 运行ROS bag. +3. 运行ROS bag. -​ ROS bag的默认目录是`/apollo/data/bag`。 +​ ROS bag的默认目录是`/apollo/data/bag`。 ​ 下面的例子展示了文件名为`example.bag`的ROS bag. 
​ 使用下面的命令: @@ -113,10 +113,10 @@ visualization-enabling布尔标志为`true`,默认情况下,要显示的障 您可以看到如下的结果: -- 一个弹出窗口,逐帧显示点云的感知结果 +- 一个弹出窗口,逐帧显示点云的感知结果 - 原点云以灰色显示 - 已检测到的边界框(带有指示标题的红色箭头): - 车辆 (绿色) - 行人 (粉色) - 自行车 (蓝色) - - 无法识别的元素 (紫色) + - 无法识别的元素 (紫色) diff --git a/docs/howto/how_to_run_perception_module_on_your_local_computer.md b/docs/howto/how_to_run_perception_module_on_your_local_computer.md index 7c7403bbd94..ba4671fbae4 100644 --- a/docs/howto/how_to_run_perception_module_on_your_local_computer.md +++ b/docs/howto/how_to_run_perception_module_on_your_local_computer.md @@ -3,7 +3,7 @@ The perception module requires Nvidia GPU and CUDA installed to run the perception algorithms with Caffe. We have already installed the CUDA and Caffe libraries in the released docker. However, the Nvidia GPU driver is not installed in the released dev docker image. To run the perception module with CUDA acceleration, we suggest installing exactly the same version of the Nvidia driver in the docker as the one installed in your host machine, and building Apollo with the GPU option. We provide step-by-step instructions on running the perception module with an Nvidia GPU below: -1. Get into the docker container via: +1. Get into the docker container via: ```bash $APOLLO_HOME/docker/scripts/dev_start.sh $APOLLO_HOME/docker/scripts/dev_into.sh ``` @@ -32,7 +32,7 @@ http://localhost:8888/ 8. Launch the perception modules - - If you want to launch all modules + - If you want to launch all modules ``` cyber_launch start /apollo/modules/perception/production/launch/perception_all.launch ``` @@ -41,7 +41,7 @@ ``` cyber_launch start /apollo/modules/perception/production/launch/perception_camera.launch ``` - + If you want to visualize camera-based results overlaid on the captured image and in bird view, mark `enable_visualization: true` in `modules/perception/production/conf/perception/camera/fusion_camera_detection_component.pb.txt` before executing the above command.
It will pop up when you play recorded data in point 9. Also, if you want to enable CIPO, add `enable_cipv: true` as a new line in the same file. @@ -62,4 +62,4 @@ http://localhost:8888/ cyber_recorder play -f /apollo/data/bag/anybag -r 0.2 ``` -Please note that the Nvidia driver should be installed appropriately even if the perception module is running in Caffe CPU_ONLY mode (i.e., using `./apollo.sh build` or `./apollo.sh build_opt` to build the perception module). Please see the detailed instruction of perception module in [the perception README](https://github.com/ApolloAuto/apollo/blob/master/modules/perception/README.md). +Please note that the Nvidia driver should be installed appropriately even if the perception module is running in Caffe CPU_ONLY mode (i.e., using `./apollo.sh build` or `./apollo.sh build_opt` to build the perception module). Please see the detailed instructions for the perception module in [the perception README](../../modules/perception/README.md). diff --git a/docs/howto/how_to_setup_dual_ipc.md b/docs/howto/how_to_setup_dual_ipc.md index d55508b559e..9a0a03efa29 100644 --- a/docs/howto/how_to_setup_dual_ipc.md +++ b/docs/howto/how_to_setup_dual_ipc.md @@ -1,12 +1,12 @@ -# How to set up Apollo 3.5's software on Dual-IPCs +# How to set up Apollo 3.5's software on Dual-IPCs The modules of Apollo 3.5 are separately launched from two industrial PCs. This guide introduces the hardware/software setup on two parallel IPCs. ## Software - Apollo 3.5 - - Linux precision time protocol - + - Linux precision time protocol + ## Runtime Framework - CyberRT @@ -18,20 +18,20 @@ There are two steps in the installation process: ### Clone and install linux PTP Install the PTP utility and synchronize the system time on both IPCs.
- + ```sh git clone https://github.com/richardcochran/linuxptp.git cd linuxptp make - + # on IPC1: sudo ./ptp4l -i eth0 -m & - + # on IPC2: sudo ./ptp4l -i eth0 -m -s & sudo ./phc2sys -a -r & ``` - + ### Clone Apollo 3.5 Install Apollo 3.5 on local ubuntu machine ```sh @@ -39,7 +39,7 @@ There are two steps in the installation process: ``` ### Build Docker environment - Refer to the [How to build and release docker](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_build_and_release.md) guide + Refer to the [How to build docker environment](../quickstart/apollo_software_installation_guide.md) ### Run CyberRT on both of IPCs 1. Change directory to apollo @@ -69,18 +69,18 @@ There are two steps in the installation process: 7. Open Chrome and go to localhost:8888 to access Apollo Dreamview: - - on IPC1 - + - on IPC1 + - - The header has 3 drop-downs, mode selector, vehicle selector and map selector. + + The header has 3 drop-downs, mode selector, vehicle selector and map selector. ![IPC1 Task](images/IPC1_dv.png) - Select mode, for example "ipc1 Mkz Standard Debug" + Select mode, for example "ipc1 Mkz Standard Debug" - ![IPC1 mode](images/IPC1_mode.png) + ![IPC1 mode](images/IPC1_mode.png) @@ -90,30 +90,30 @@ There are two steps in the installation process: - Select map, for example "Sunnyvale Big Loop" + Select map, for example "Sunnyvale Big Loop" - ![IPC1 map](images/IPC1_map.png) + ![IPC1 map](images/IPC1_map.png) - All the tasks that you could perform in DreamView, in general, setup button turns on all the modules. + All the tasks that you could perform in DreamView, in general, setup button turns on all the modules. ![IPC1 setup](images/IPC1_setup.png) - + All the hardware components should be connected to IPC1 and the modules, localization, perception, routing, recorder, traffic light and transform, are allocated on IPC1 also. 
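Because modules on IPC2 consume timestamped messages produced on IPC1, the PTP synchronization set up earlier is what keeps those timestamps comparable. A toy calculation with made-up clock readings shows the offset that tools like `phc2sys` drive toward zero:

```shell
# Toy sketch: offset between the two IPC clocks, in microseconds.
t_ipc1=1000.000200   # hypothetical clock reading on IPC1 (seconds)
t_ipc2=1000.000450   # hypothetical reading on IPC2 at the same instant
offset_us=$(awk -v a="$t_ipc1" -v b="$t_ipc2" 'BEGIN { printf "%.0f", (b - a) * 1e6 }')
echo "offset_us=${offset_us}"
```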
- Module Control on sidebar panel is used to check the modules on IPC1 + Module Control on sidebar panel is used to check the modules on IPC1 - ![IPC1 check](images/IPC1_check.png) + ![IPC1 check](images/IPC1_check.png) In order to open dreamview on IPC2, user must stop it on IPC1 by using the below command: ```sh # Stop dreamview on IPC1 bash scripts/bootstrap.sh stop ``` - - - on IPC2 + + - on IPC2 Change CYBER_IP in cyber/setup.bash to IPC2's ip address: ```sh source cyber/setup.bash @@ -132,19 +132,19 @@ There are two steps in the installation process: The modules - planning, prediction and control are assigned on IPC2. - Module Control on sidebar panel is used to check the modules on IPC2 + Module Control on sidebar panel is used to check the modules on IPC2 + + ![IPC2 modules](images/IPC2_check.png) - ![IPC2 modules](images/IPC2_check.png) - - [See Dreamview user's guide](https://github.com/ApolloAuto/apollo/blob/master/docs/specs/dreamview_usage_table.md) + [See Dreamview user's guide](../specs/dreamview_usage_table.md) 8. How to start/stop Dreamview: - The current version of Dreamview shouldn't run on the different IPCs simultaneously, so the user must perform it alternatively on IPC1 or IPC2. - + The current version of Dreamview shouldn't run on the different IPCs simultaneously, so the user must perform it alternatively on IPC1 or IPC2. + The code below can be used to stop Dreamview on IPC2 and start it on IPC1. - + ```sh # Stop Dreamview on IPC2 bash scripts/bootstrap.sh stop @@ -152,22 +152,22 @@ There are two steps in the installation process: # Start Dreamview on IPC1 bash scripts/bootstrap.sh ``` - + 9. Cyber monitor - Cyber monitor is CyberRT's tool used to check the status of all of the modules on local and remote machines. The User may observe the activity status of all the hardware and software components and ensure that they are working correctly. 
- + Cyber monitor is CyberRT's tool used to check the status of all of the modules on local and remote machines. The User may observe the activity status of all the hardware and software components and ensure that they are working correctly. + ## Future work (To Do) - Multiple Dreamviews may run simultaneously - Fix a bug that modules are still greyed-out after clicking the setup button. Users may check each modules' status by using the command ```sh ps aux | grep mainboard ``` - + # License - [Apache license](https://github.com/ApolloAuto/apollo/blob/master/LICENSE) + [Apache license](../../LICENSE) diff --git a/docs/howto/how_to_tune_control_parameters.md b/docs/howto/how_to_tune_control_parameters.md index e4043bdd032..631481f67e4 100644 --- a/docs/howto/how_to_tune_control_parameters.md +++ b/docs/howto/how_to_tune_control_parameters.md @@ -96,7 +96,7 @@ lat_controller_conf { ### Longitudinal Controller Tuning The longitudinal controller is composed of Cascaded PID controllers that include one station controller and a high/low speed controller with different gains for different speeds. Apollo manages tuning in open loop and closed loop by: -- OpenLoop: Calibration table generation. Please refer to [how_to_update_vehicle_calibration.md](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_update_vehicle_calibration.md) for detailed steps. +- OpenLoop: Calibration table generation. Please refer to [how_to_update_vehicle_calibration.md](how_to_update_vehicle_calibration.md) for detailed steps. - Closeloop: Based on the order of High Speed Controller -> Low Speed Controller -> Station Controller. 
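The cascaded PID structure above can be pictured with one toy PID update; the gains, error samples, and timestep below are invented for illustration and are not Apollo's tuned parameters:

```shell
# Toy single PID step (all numbers are made-up illustrations).
pid_out=$(awk 'BEGIN {
  kp = 0.5; ki = 0.1; kd = 0.0;         # hypothetical gains
  err = 2.0; prev_err = 1.0; dt = 0.01; # hypothetical error samples and timestep
  integral = err * dt;                  # one integration step
  printf "%.3f", kp * err + ki * integral + kd * (err - prev_err) / dt;
}')
echo "pid_out=${pid_out}"
```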
#### High/Low Speed Controller Tuning diff --git a/docs/howto/how_to_tune_control_parameters_cn.md b/docs/howto/how_to_tune_control_parameters_cn.md index 008f1c83ac4..246df1641c3 100644 --- a/docs/howto/how_to_tune_control_parameters_cn.md +++ b/docs/howto/how_to_tune_control_parameters_cn.md @@ -22,7 +22,7 @@ #### 横向控制器 横向控制器是基于LQR的最优控制器。 该控制器的动力学模型是一个简单的带有侧滑的自行车模型。它被分为两类,包括闭环和开环。 -- 闭环提供具有4种状态的离散反馈LQR控制器: +- 闭环提供具有4种状态的离散反馈LQR控制器: - 横向误差 - 横向误差率 - 航向误差 @@ -95,7 +95,7 @@ lat_controller_conf { ### 纵控制器的调谐 纵向控制器由级联的PID控制器组成,该控制器包括一个站控制器和一个具有不同速度增益的高速/低速控制器。Apollo管理开环和闭环的调谐通过: -- 开环: 校准表生成。请参阅[how_to_update_vehicle_calibration.md](https://github.com/ApolloAuto/apollo/blob/master/docs/howto/how_to_update_vehicle_calibration.md)的详细步骤 +- 开环: 校准表生成。请参阅[how_to_update_vehicle_calibration.md](how_to_update_vehicle_calibration.md)的详细步骤 - 闭环: 基于高速控制器->低速控制器->站控制器的顺序。 #### 高/低速控制器的调谐 diff --git a/docs/howto/how_to_use_apollo_2.5_navigation_mode.md b/docs/howto/how_to_use_apollo_2.5_navigation_mode.md index 2cb32f4fb76..90cb48a6090 100644 --- a/docs/howto/how_to_use_apollo_2.5_navigation_mode.md +++ b/docs/howto/how_to_use_apollo_2.5_navigation_mode.md @@ -6,35 +6,35 @@ Apollo is well received and highly commended by developers in the field of auton Relative map is the newest feature to be introduced in Apollo 2.5. From the architectural level, the relative map module is the middle layer linking the HDMap to the Perception module and the Planning module as seen in the image below. The relative map module generates real-time maps based on the vehicle’s coordinate system (the format is in the same format as HDMaps). The module also outputs reference lines for the Planning module to use. From the angle of developers, a navigation mode based on relative maps enables developers to implement real-vehicle road tests. As a result, barriers to development have been significantly reduced. 
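Since the relative map is expressed in the vehicle's own coordinate system, every map point is rewritten against the current pose; a toy transform for one point (the pose and point values are invented for illustration) looks like:

```shell
# Toy sketch: world point -> vehicle-relative coordinates (made-up numbers).
rel=$(awk 'BEGIN {
  vx = 100.0; vy = 50.0; yaw = 0.0;  # hypothetical vehicle pose (yaw in radians)
  px = 103.0; py = 54.0;             # hypothetical world-frame point
  dx = px - vx; dy = py - vy;
  # rotate the translated point into the vehicle frame
  printf "%.1f %.1f", cos(yaw) * dx + sin(yaw) * dy, -sin(yaw) * dx + cos(yaw) * dy;
}')
echo "relative=${rel}"
```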
-![Software OverView](https://github.com/ApolloAuto/apollo/blob/master/docs/demo_guide/images/Software_Overview.png) +![Software OverView](../demo_guide/images/Software_Overview.png) -The basic idea behind the navigation mode is: +The basic idea behind the navigation mode is: * Record the driving path of a manually driven vehicle on a desired path * Use Apollo tools to process the original path and obtain a smoothed out path (navigation line). This path is then used to * Replace the global route given by the routing module - * Serve as the reference line for the planning modulefor generating the relative map. + * Serve as the reference line for the planning module for generating the relative map. * In addition, the path can also be used in combination with the HDMap to replace the lane reference line in the HDMap (by default, the HDMap uses the lane centerline as the reference. However, this method may not suit certain circumstances where using the vehicle's actual navigation line instead could be a more effective solution). -* A driver drives the vehicle to the starting point of a desired path, then selects the navigation mode and enables relevant modules in Dreamview. After the above configuration, the vehicle needs to be switched to autonomous driving status and run in this status. +* A driver drives the vehicle to the starting point of a desired path, then selects the navigation mode and enables relevant modules in Dreamview. After the above configuration, the vehicle needs to be switched to autonomous driving status and run in this status. * While travelling in the autonomous mode, the perception module’s camera will dynamically detect obstacles and road boundaries, while the map module’s relative map sub-module generates a relative map in real time (using a relative coordinate system with the current position of the vehicle as the origin), based on the recorded path (navigation line) and the road boundaries. With the relative map created by the map module and obstacle information created by the perception module, the planning module will dynamically output a local driving path to the control module for execution.
+* A driver drives the vehicle to the starting point of a desired path, then selects the navigation mode and enables relevant modules in Dreamview. After the above configuration, the vehicle needs to be switched to autonomous driving status and run in this status. +* While travelling in the autonomous mode, the perception module’s camera will dynamically detect obstacles and road boundaries, while the map module’s relative map sub-module generates a relative map in real time (using a relative coordinate system with the current position of the vehicle as the origin), based on the recorded path (navigation line) and the road boundaries. With the relative map created by the map module and obstacle information created by the perception module, the planning module will dynamically output a local driving path to the control module for execution. * At present, the navigation mode only supports single-lane driving. It can perform tasks such as acceleration and deceleration, car following, slowing down and stopping before obstacles, or nudge obstacles within the lane width. Subsequent versions will see further improvements to support multi-lane driving and traffic lights/signs detection. This article fully explains the build of Apollo 2.5, navigation line data collection and production, front-end compilation and configuration of Dreamview, and navigation mode usage, etc. Hopefully this will bring convenience to developers when properly using Apollo 2.5. ## 1. Building the Apollo 2.5 environment -First, download the Apollo 2.5 source code from GitHub website. This can be done by either using git command or getting the compressed package directly from the web page. There are two options to build Apollo after downloading the source code to an appropriate directory: 1. in Visual Studio Code (recommended); 2. by using the command line. Of course, the common prerequisite is that Docker has already been successfully installed on your computer. 
You can use the script file [`install_docker.sh`](https://github.com/ApolloAuto/apollo/blob/master/docker/scripts/install_docker.sh) to install Docker firstly. +First, download the Apollo 2.5 source code from the GitHub website. This can be done either by using a git command or by getting the compressed package directly from the web page. There are two options to build Apollo after downloading the source code to an appropriate directory: 1. in Visual Studio Code (recommended); 2. by using the command line. Of course, the common prerequisite is that Docker has already been successfully installed on your computer. You can use the script file [`install_docker.sh`](../../docker/scripts/install_docker.sh) to install Docker first. ### 1.1 Build with the Visual Studio Code Open Visual Studio Code and execute menu command `File -> Open Folder`. In the pop-up dialog, select an Apollo project source folder and click `OK`, as shown in the following figure: -![img](images/navigation_mode/open_directory_en.png) +![img](images/navigation_mode/open_directory_en.png) Next, execute menu command `Tasks -> Run Build Task` or directly press `Ctrl + Shift + B` (shortcut keys which are the same as in Visual Studio and QT) to build a project. Docker will be launched when compiling if it has not yet been started. A superuser password needs to be entered in the terminal window at the bottom. After the command is executed, a display of `Terminal will be reused by tasks, press any key to close it.` in the terminal window at the bottom indicates that the build is successful. Keep a good internet connection during the whole process, otherwise the dependencies cannot be downloaded. You may encounter some problems during the build.
Solutions can be found in [this blog](https://blog.csdn.net/davidhopper/article/details/79349927) post and the [Help Doc](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md) on GitHub. -![img](images/navigation_mode/build_successfully_en.png) +![img](images/navigation_mode/build_successfully_en.png) ### 1.2 Build in a terminal @@ -67,29 +67,29 @@ We are using the UTM area ID of Changsha area, for UTM sub-areas in China, pleas --local_utm_zone_id=49 ``` -`Note:` If the coordinates were not changed before recording data, they must not be changed when playing back data during offline testing. Changing the ID after recording will disturb navigation line locating! +`Note:` If the coordinates were not changed before recording data, they must not be changed when playing back data during offline testing. Changing the ID after recording will disturb navigation line locating! ### 1.4 Configuring your UTM area ID for Dreamview -Open` [your_apollo_root_dir]/modules/common/data/global_flagfile.txt`, add this line at the bottom (we are using the UTM area ID of Changsha area, for UTM sub-areas in China, please go to [this page](http://www.360doc.com/content/14/0729/10/3046928_397828751.shtml)): +Open` [your_apollo_root_dir]/modules/common/data/global_flagfile.txt`, add this line at the bottom (we are using the UTM area ID of Changsha area, for UTM sub-areas in China, please go to [this page](http://www.360doc.com/content/14/0729/10/3046928_397828751.shtml)): ``` --local_utm_zone_id=49 ``` ## 2. 
Collect navigation line raw data -Import the pre-specified Apollo file into the in-car IPC, enter Docker (follow steps in 1.2), and execute the following command to launch Dreamview: +Import the pre-specified Apollo file into the in-car IPC, enter Docker (follow steps in 1.2), and execute the following command to launch Dreamview: ``` bash bash scripts/bootstrap.sh ``` Open [http://localhost:8888](http://localhost:8888) in a Chrome or Firefox browser (do not use proxy), and enter the Dreamview interface: -![img](images/navigation_mode/dreamview_interface.png) +![img](images/navigation_mode/dreamview_interface.png) * The driver controls the vehicle and parks at the starting location of the road test; * The operator clicks the `Module Controller` button in the toolbar from the left side of the Dreamview interface. In the `Module Controller` page, select `GPS`, `Localization`, and `Record Bag`. Note: If the recorded data bag will be used in an offline test, also select `CAN Bus`. * The driver starts the engine and drives to the end location as planned; * The operator unselects the `Record Bag` option in the Dreamview interface, and a directory such as `2018-04-01-09-58-00` will be generated in the `/apollo/data/bag` directory (in Docker, an associative directory will be created on the dedicated host `[your_apollo_root_dir]/data/bag`). The data bag (i.e. `2018-04-01-09-58-00.bag`) will be kept there. Take note of its path and filename as it will be needed next.
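The `--local_utm_zone_id=49` value configured in sections 1.3 and 1.4 can be sanity-checked from longitude alone with the standard UTM zone formula (the Changsha longitude below is an approximation used only for illustration):

```shell
# Standard UTM zone formula: zone = floor((lon + 180) / 6) + 1
lon=113   # approximate longitude of Changsha, for illustration only
zone=$(( (lon + 180) / 6 + 1 ))
echo "utm_zone=${zone}"
```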
`Note:` the default recording time in a bag is 1 minute, and the default size of a bag is 2048 MB, which can be edited in `/apollo/scripts/record_bag.sh`. For convenience, the next steps assume the `2018-04-01-09-58-00.bag` is in the `/apollo/data/bag` directory in this article. @@ -111,7 +111,7 @@ python viewer_raw.py ./path_2018-04-01-09-58-00.bag.txt ``` And a figure like the image below, will be generated: -![img](images/navigation_mode/view_raw_data.png) +![img](images/navigation_mode/view_raw_data.png) ### 3.2 Smoothen the raw data @@ -126,15 +126,15 @@ python viewer_smooth.py ./path_2018-04-01-09-58-00.bag.txt ./path_2018-04-01-09- ``` The first argument `./path_2018-04-01-09-58-00.bag.txt` is raw data, the second argument `./path_2018-04-01-09-58-00.bag.txt.smoothed` is the smoothed result. A figure like below will be generated: -![img](images/navigation_mode/view_smoothing_results.png) +![img](images/navigation_mode/view_smoothing_results.png) ## 4. Dreamview frontend compilation and configuration -Dreamview frontend uses the Baidu Map by default. It can be changed to Google Maps by re-compiling frontend, as seen in the sub-sections below (Note: if you wish to continue with the default Map application, please ignore the sub-sections below): +Dreamview frontend uses the Baidu Map by default. It can be changed to Google Maps by re-compiling frontend, as seen in the sub-sections below (Note: if you wish to continue with the default Map application, please ignore the sub-sections below): ### 4.1 Change navigation map settings -Open the file`[your_apollo_root_dir]/modules/dreamview/frontend/src/store/config/ parameters.yml`, change the map settings to meet your needs: +Open the file`[your_apollo_root_dir]/modules/dreamview/frontend/src/store/config/ parameters.yml`, change the map settings to meet your needs: ``` bash navigation: @@ -163,8 +163,8 @@ ERROR in ../~/css-loader!../~/sass-loader/lib/loader.js?{"includePaths":["./node ... 
(The error continues, but we have the information we need to debug it)
```
-This is because of built-in dependent package inconsistency, which can be resolved by executing the following command in Docker: 
-(`Note:` keep your internet connection steady, or you might not be able to download the dependent package again): 
+This is caused by an inconsistency in the built-in dependency packages, which can be resolved by executing the following commands in Docker
+(`Note:` keep your internet connection steady, or you might not be able to download the dependency packages again):
``` bash
cd /apollo/modules/dreamview/frontend/
rm -rf node_modules
@@ -187,24 +187,24 @@
bash docker/scripts/dev_into.sh
# Start Dreamview and monitoring process
bash scripts/bootstrap.sh
```
-For offline mock tests, loop the data bag recorded in Step 2 `/apollo/data/bag/2018-04-01-09-58-00.bag` (data recorded in my device). Please ignore this step for real vehicle commissioning. 
+For offline mock tests, loop-play the data bag recorded in Step 2, `/apollo/data/bag/2018-04-01-09-58-00.bag` (the data recorded on my device). Skip this step for real vehicle commissioning.
``` bash
-# For offline mock tests, loop the data bag recorded in step 2. Please ignore this step for real vehicle commissioning. 
+# For offline mock tests, loop-play the data bag recorded in Step 2. Skip this step for real vehicle commissioning.
rosbag play -l /apollo/data/bag/2018-04-01-09-58-00.bag
```
Open [http://localhost:8888](http://localhost:8888) in the browser (do NOT use a proxy), enter the Dreamview interface, click the dropdown box in the upper right, and select `Navigation` mode, as shown in the screenshot below:
-![img](images/navigation_mode/enable_navigation_mode.png) 
+![img](images/navigation_mode/enable_navigation_mode.png)

### 5.2 Enable relevant modules in the navigation mode

Click on the `Module Controller` button in the toolbar on the left side of the Dreamview interface and enter the module controller page.
For offline mock tests, select `Relative Map`, `Navi Planning`, and other modules as needed, as shown in the screenshot below (the module that shows blank text is the Mobileye module, which will be visible only if the related hardware is installed and configured):
-![img](images/navigation_mode/test_in_navigation_mode.png) 
+![img](images/navigation_mode/test_in_navigation_mode.png)

For real vehicle commissioning, all modules except `Record Bag`, `Mobileye` (if the Mobileye hardware has not been installed, it will be shown as blank text) and `Third Party Perception` should be activated, as displayed in the next screenshot:
-![img](images/navigation_mode/drive_car_in_navigation_mode.png) 
+![img](images/navigation_mode/drive_car_in_navigation_mode.png)

### 5.3 Send the navigation line data

@@ -215,11 +215,11 @@
python navigator.py ./path_2018-04-01-09-58-00.bag.txt.smoothed
```
The following screenshot shows the interface after Dreamview receives the navigation line data during offline mock testing. You can see the Baidu Map interface in the upper left corner. Our navigation line is shown as red lines in Baidu Map, and as white lines in the main interface.
-![img](images/navigation_mode/navigation_mode_with_reference_line_test.png) 
+![img](images/navigation_mode/navigation_mode_with_reference_line_test.png)

The next screenshot shows the interface after Dreamview receives the navigation line data during real vehicle commissioning. You can see the Baidu Map interface in the upper left corner. Our navigation line is shown as red lines in Baidu Map, and as yellow lines in the main interface.
-![img](images/navigation_mode/navigation_mode_with_reference_line_car.png) 
+![img](images/navigation_mode/navigation_mode_with_reference_line_car.png)

A few tips to focus on:
* If the navigation line is not displayed correctly in the Dreamview interface, the reasons could be:
@@ -230,6 +230,6 @@ A few tips to focus on:
 # Stop Dreamview and monitoring process
 bash scripts/bootstrap.sh stop
 # Restart Dreamview and monitoring process
- bash scripts/bootstrap.sh 
+ bash scripts/bootstrap.sh
```
-* Every time the vehicle returns to the starting point, it is necessary to send the navigation line data again, whether it is the offline mock test or the vehicle commissioning. 
\ No newline at end of file
+* Every time the vehicle returns to the starting point, it is necessary to send the navigation line data again, whether for the offline mock test or for real vehicle commissioning.
diff --git a/docs/howto/how_to_use_apollo_2.5_navigation_mode_cn.md b/docs/howto/how_to_use_apollo_2.5_navigation_mode_cn.md
index 12c658d93d6..379aa52a33c 100644
--- a/docs/howto/how_to_use_apollo_2.5_navigation_mode_cn.md
+++ b/docs/howto/how_to_use_apollo_2.5_navigation_mode_cn.md
@@ -3,7 +3,7 @@

`Apollo`项目以其优异的系统架构、完整的模块功能、良好的开源生态及规范的代码风格,受到众多开发者的喜爱和好评。不过在`Apollo`之前的版本中,感知、预测、导航、规划模块均依赖于高精地图,而高精地图的制作方法繁琐且不透明,对于很多开发者而言,这是一个难以逾越的障碍。因为没有高精地图,很多人只能使用`Apollo`提供的模拟数据包进行走马观花式的观赏,而无法在测试道路上完成真枪实弹式的实车调试,这极大降低了`Apollo`项目带来的便利,也不利于自动驾驶开源社区的发展和壮大。显然,`Apollo`项目组已注意到该问题,经过他们几个月的艰苦努力,终于在2.5版开发了一种新的基于相对地图(`relative map`)的导航模式(`navigation mode`),利用该模式可顺利实施测试道路上的实车调试。

-相对地图是Apollo2.5引入的新特性。从架构层面,相对地图模块是连接高精地图(`HD Map`)、感知(`Perception`)模块和规划(`Planning`)模块的中间层。相对地图模块会实时生成基于车身坐标系的地图(格式与高精地图一致),并且输出供规划模块使用的参考线。更多信息,可以参考[相对地图的说明文档](https://github.com/ApolloAuto/apollo/blob/master/modules/map/relative_map/README.md)。从开发者友好性角度看,基于相对地图的导航模式,让开发者可以不依赖高精地图便可实施测试道路的实车调试,极大降低了开发者的使用门槛。
+相对地图是Apollo2.5引入的新特性。从架构层面,相对地图模块是连接高精地图(`HD
Map`)、感知(`Perception`)模块和规划(`Planning`)模块的中间层。相对地图模块会实时生成基于车身坐标系的地图(格式与高精地图一致),并且输出供规划模块使用的参考线。更多信息,可以参考[相对地图的说明文档](../../modules/map/relative_map/README.md)。从开发者友好性角度看,基于相对地图的导航模式,让开发者可以不依赖高精地图便可实施测试道路的实车调试,极大降低了开发者的使用门槛。 导航模式的基本思路是: @@ -17,19 +17,19 @@ ## 一、Apollo 2.5版的构建 -首先从[GitHub网站](https://github.com/ApolloAuto/apollo)下载`Apollo2.5`版源代码,可以使用`git`命令下载,也可以直接通过网页下载压缩包。源代码下载完成并放置到合适的目录后,可以使用两种方法构建:1.在`Visual Studio Code`中构建(推荐);2.使用命令行构建。当然,两种方法都有一个前提,就是在你的机器上已经顺利安装了`Docker`。你可以使用`Apollo`提供的脚本文件[`install_docker.sh`](https://github.com/ApolloAuto/apollo/blob/master/docker/scripts/install_docker.sh)安装`Docker`。 +首先从[GitHub网站](https://github.com/ApolloAuto/apollo)下载`Apollo2.5`版源代码,可以使用`git`命令下载,也可以直接通过网页下载压缩包。源代码下载完成并放置到合适的目录后,可以使用两种方法构建:1.在`Visual Studio Code`中构建(推荐);2.使用命令行构建。当然,两种方法都有一个前提,就是在你的机器上已经顺利安装了`Docker`。你可以使用`Apollo`提供的脚本文件[`install_docker.sh`](../../docker/scripts/install_docker.sh)安装`Docker`。 ### 1.1 在Visual Studio Code中构建 打开`Visual Studio Code`,执行菜单命令`文件->打开文件夹`,在弹出的对话框中,选择`Apollo项目`源文件夹,点击“确定”,如下图所示: -![img](images/navigation_mode/open_directory.png) +![img](images/navigation_mode/open_directory.png) -![img](images/navigation_mode/choose_apollo_directory.png) +![img](images/navigation_mode/choose_apollo_directory.png) 之后,执行菜单命令`任务->运行生成任务`或直接按快捷键`Ctrl+Shift+B`(与`Visual Studio`和`QT`的快捷键一致)构建工程,若之前没有启动过`Docker`,则编译时会启动`Docker`,需在底部终端窗口输入超级用户密码。命令执行完毕,若在底部终端窗口出现`终端将被任务重用,按任意键关闭。`信息(如下图所示),则表示构建成功。整个过程**一定要保持网络畅通**,否则无法下载依赖包。构建过程可能会遇到一些问题,解决方法可参见我写的一篇[博客](https://blog.csdn.net/davidhopper/article/details/79349927) ,也可直接查看`GitHub`网站的[帮助文档](https://github.com/ApolloAuto/apollo/blob/r5.5.0/docs/howto/how_to_build_and_debug_apollo_in_vscode_cn.md)。 -![img](images/navigation_mode/build_successfully.png) +![img](images/navigation_mode/build_successfully.png) ### 1.2 在命令行中构建 @@ -78,13 +78,13 @@ bash scripts/bootstrap.sh ``` 在浏览器中打开网页[http://localhost:8888](http://localhost:8888)(注意不要使用代理),进入`Dreamview`界面,如下图所示: 
-![img](images/navigation_mode/dreamview_interface.png) 
+![img](images/navigation_mode/dreamview_interface.png)

**1** 驾驶员将车辆驶入待测试路段起点;

**2** 操作员点击`Dreamview`界面左侧工具栏中的`Module Controller`按钮,进入模块控制页面,选中`GPS`、`Localization`、`Record Bag`选项,**注意:如果采集的数据包需用于线下模拟测试,还需加上`CAN Bus`选项。**

-![img](images/navigation_mode/options_for_data_recording.png) 
+![img](images/navigation_mode/options_for_data_recording.png)

**3** 驾驶员从起点启动车辆并按预定路线行驶至终点;

@@ -111,7 +111,7 @@ python viewer_raw.py ./path_2018-04-01-09-58-00.bag.txt
```
会显示类似下图的路径图:
-![img](images/navigation_mode/view_raw_data.png) 
+![img](images/navigation_mode/view_raw_data.png)

### 3.2 对裸数据进行平滑处理

@@ -127,7 +127,7 @@ python viewer_smooth.py ./path_2018-04-01-09-58-00.bag.txt ./path_2018-04-01-09-
```
其中,第一个参数`./path_2018-04-01-09-58-00.bag.txt`是裸数据,第二个参数`./path_2018-04-01-09-58-00.bag.txt.smoothed`是平滑结果,显示效果类似下图:
-![img](images/navigation_mode/view_smoothing_results.png) 
+![img](images/navigation_mode/view_smoothing_results.png)

## 四、Dreamview前端的编译

@@ -135,7 +135,7 @@ python viewer_smooth.py ./path_2018-04-01-09-58-00.bag.txt ./path_2018-04-01-09-

### 4.1 更改导航地图

-打开文件`[apollo项目根目录]/modules/dreamview/frontend/src/store/config/ parameters.yml`,根据需要将下述内容替换为`Google`地图或`Baidu`地图: 
+打开文件`[apollo项目根目录]/modules/dreamview/frontend/src/store/config/parameters.yml`,根据需要将下述内容替换为`Google`地图或`Baidu`地图:
``` bash
navigation:
  # possible options: BaiduMap or GoogleMap
@@ -192,19 +192,19 @@ bash scripts/bootstrap.sh
# 模拟测试情形下,循环播放录制数据;实车调试情形忽略该步骤
rosbag play -l /apollo/data/bag/2018-04-01-09-58-00.bag
```
-在浏览器中打开网页[http://localhost:8888](http://localhost:8888)(注意不要使用代理),进入`Dreamview`界面,点击右上方下拉框,将模式设置为`Navigation`(导航模式),如下图所示: 
+在浏览器中打开网页[http://localhost:8888](http://localhost:8888)(注意不要使用代理),进入`Dreamview`界面,点击右上方下拉框,将模式设置为`Navigation`(导航模式),如下图所示:

-![img](images/navigation_mode/enable_navigation_mode.png) 
+![img](images/navigation_mode/enable_navigation_mode.png)

### 5.2 启用导航模式下的相关功能模块

点击`Dreamview`界面左侧工具栏中的`Module
Controller`按钮,进入模块控制页面。**若是线下模拟测试**,选中`Relative Map`、`Navi Planning`选项,其他模块根据需要开启,如下图所示(图中显示空白文本的模块是`Mobileye`模块,需安装配置好相关硬件后才可见):

-![img](images/navigation_mode/test_in_navigation_mode.png) 
+![img](images/navigation_mode/test_in_navigation_mode.png)

**若是实车调试**,建议除`Record Bag`、`Mobileye`(若`Mobileye`硬件未安装,则会显示为空白文本)和`Third Party Perception`模块外,其余模块全部开启,如下图所示:

-![img](images/navigation_mode/drive_car_in_navigation_mode.png) 
+![img](images/navigation_mode/drive_car_in_navigation_mode.png)

### 5.3 发送参考线数据

@@ -215,11 +215,11 @@ python navigator.py ./path_2018-04-01-09-58-00.bag.txt.smoothed
```
下图是**线下模拟测试情形下**`Dreamview`接收到参考线后的界面,注意界面左上角已出现了百度地图界面,我们发送的参考线在百度地图中以红线方式、在主界面中以白色车道线的方式展现。
-![img](images/navigation_mode/navigation_mode_with_reference_line_test.png) 
+![img](images/navigation_mode/navigation_mode_with_reference_line_test.png)

下图是**实车调试情形下的**`Dreamview`接收到参考线后的界面,注意界面左上角已出现了百度地图界面,我们发送的参考线在百度地图中以红线方式、在主界面中以黄色车道线的方式展现。

-![img](images/navigation_mode/navigation_mode_with_reference_line_car.png) 
+![img](images/navigation_mode/navigation_mode_with_reference_line_car.png)

需注意以下几点:

@@ -228,6 +228,6 @@ python navigator.py ./path_2018-04-01-09-58-00.bag.txt.smoothed
 # 停止Dreamview后台服务
 bash scripts/bootstrap.sh stop
 # 重新启动Dreamview后台服务
-bash scripts/bootstrap.sh 
+bash scripts/bootstrap.sh
```
-(2) 每次车辆重新回到起点后,无论是线下模拟测试还是实车调试情形,**均需再次发送参考线数据**。
\ No newline at end of file
+(2) 每次车辆重新回到起点后,无论是线下模拟测试还是实车调试情形,**均需再次发送参考线数据**。