From c0c6e0dab7ecec0b3723a8b3c1041c422b5f26b0 Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Tue, 2 Apr 2024 11:37:09 +0900 Subject: [PATCH 01/37] doc: update README for pedestrian traffic light recognition Signed-off-by: tzhong518 --- .../README.md | 33 ++++++++++++++++--- .../README.md | 14 ++++++-- .../README.md | 4 ++- ...traffic_light_multi_camera_fusion_node.cpp | 3 ++ .../README.md | 23 +++++++------ 5 files changed, 58 insertions(+), 19 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index b14fefbd43beb..9d28d1f84a17e 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -2,7 +2,7 @@ ## Purpose -`crosswalk_traffic_light_estimator` is a module that estimates pedestrian traffic signals from HDMap and detected vehicle traffic signals. +`crosswalk_traffic_light_estimator` is a module that estimates pedestrian traffic signals from HDMap and detected traffic signals. ## Inputs / Outputs @@ -25,10 +25,13 @@ | Name | Type | Description | Default value | | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | -| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detect color. | `2.0` | +| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedetrian traffic light color. ## Inner-workings / Algorithms - +1. Estimate the color of pedestrian traffic light from HDMap and detected vehicle traffic signals. +2. If pedestrian traffic light recognition is available, determine the final state based on classification result and estimation result. +### Estimation ```plantuml start @@ -58,7 +61,7 @@ end If traffic between pedestrians and vehicles is controlled by traffic signals, the crosswalk traffic signal maybe **RED** in order to prevent pedestrian from crossing when the following conditions are satisfied. 
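The two situations below spell out those conditions. Before that, here is a minimal sketch of the decision rule in the estimation flow above (hypothetical type and function names for illustration only, not the node's actual implementation):

```cpp
// Hedged sketch: a pedestrian signal is estimated RED when a conflicting
// vehicle signal is currently GREEN/AMBER, or - when use_last_detect_color
// is true - was GREEN/AMBER within last_detect_color_hold_time.
// All names here are illustrative assumptions.
enum class Color { GREEN, AMBER, RED, UNKNOWN };

struct VehicleSignalObservation
{
  Color current;          // latest detection result
  Color last_detected;    // last non-UNKNOWN detection result
  double sec_since_last;  // time elapsed since that last detection
};

Color estimatePedestrianSignal(
  const VehicleSignalObservation & obs, bool use_last_detect_color, double hold_time_sec)
{
  const bool now_go = obs.current == Color::GREEN || obs.current == Color::AMBER;
  const bool recently_go =
    use_last_detect_color && obs.sec_since_last <= hold_time_sec &&
    (obs.last_detected == Color::GREEN || obs.last_detected == Color::AMBER);
  return (now_go || recently_go) ? Color::RED : Color::UNKNOWN;
}
```
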
-### Situation1 +#### Situation1 - crosswalk conflicts **STRAIGHT** lanelet - the lanelet refers **GREEN** or **AMBER** traffic signal (The following pictures show only **GREEN** case) @@ -70,7 +73,7 @@ If traffic between pedestrians and vehicles is controlled by traffic signals, th -### Situation2 +#### Situation2 - crosswalk conflicts different turn direction lanelets (STRAIGHT and LEFT, LEFT and RIGHT, RIGHT and STRAIGHT) - the lanelets refer **GREEN** or **AMBER** traffic signal (The following pictures show only **GREEN** case) @@ -79,6 +82,26 @@ If traffic between pedestrians and vehicles is controlled by traffic signals, th +### Final state +```plantumul +start +if (the pedestrian traffic light classification result exists)then + : update the flashing flag according to the classification result(in_signal) and last_signals + if (the traffic light is flashing?)then(yes) + : update the traffic light state + else(no) + : the traffic light state is the same with the classification result +if (the classification result not exists) + : the traffic light state is the same with the estimation + : output the current traffic light state +end +``` + +#### Update flashing flag +
+ +
+ ## Assumptions / Known limits ## Future extensions / Unimplemented parts diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 6e720aabc7593..3782c5944245e 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -8,15 +8,22 @@ traffic_light_classifier is a package for classifying traffic light labels using ### cnn_classifier -Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. -Totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. +Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. +We trained classifiers for vehicular signals and pedestrian signals separately. +For vehicular signals, totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: - | Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | | EfficientNet-b1 | 128 x 128 | 99.76% | | MobileNet-v2 | 224 x 224 | 99.81% | +For pedestrian signals, totally 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. +The information of the models is listed here: +| Name | Input Size | Test Accuracy | +| --------------- | ---------- | ------------- | +| EfficientNet-b1 | 128 x 128 | 97.89% | +| MobileNet-v2 | 224 x 224 | 99.10% | + ### hsv_classifier Traffic light colors (green, yellow and red) are classified in HSV model. @@ -57,6 +64,7 @@ These colors and shapes are assigned to the message as follows: | `classifier_type` | int | if the value is `1`, cnn_classifier is used | | `data_path` | str | packages data and artifacts directory path | | `backlight_threshold` | float | If the intensity get grater than this overwrite with UNKNOWN in corresponding RoI. Note that, if the value is much higher, the node only overwrites in the harsher backlight situations. Therefore, If you wouldn't like to use this feature set this value to `1.0`. The value can be `[0.0, 1.0]`. The confidence of overwritten signal is set to `0.0`. | +| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. | ### Core Parameters diff --git a/perception/autoware_traffic_light_map_based_detector/README.md b/perception/autoware_traffic_light_map_based_detector/README.md index 8a59db19ae64d..b6ba14df262be 100644 --- a/perception/autoware_traffic_light_map_based_detector/README.md +++ b/perception/autoware_traffic_light_map_based_detector/README.md @@ -9,7 +9,7 @@ Calibration and vibration errors can be entered as parameters, and the size of t ![traffic_light_map_based_detector_result](./docs/traffic_light_map_based_detector_result.svg) If the node receives route information, it only looks at traffic lights on that route. -If the node receives no route information, it looks at a radius of 200 meters and the angle between the traffic light and the camera is less than 40 degrees. +If the node receives no route information, it looks at a radius of max_detection_range and the angle between the traffic light and the camera is less than traffic_light_max_angle_range. 
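As a rough sketch of that gating (hypothetical helper names and geometry; the actual node also projects lights through the camera frustum using the lanelet map), the check reduces to a distance test plus an angle test:

```cpp
// Hedged sketch of the range/angle gate described above. Assumptions: the
// camera position, the traffic light position and the light's facing
// direction are already expressed in a common frame; names are illustrative.
#include <cmath>

struct Vec3
{
  double x, y, z;
};

static double norm(const Vec3 & v) { return std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z); }
static double dot(const Vec3 & a, const Vec3 & b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Keep a traffic light only if it is within max_detection_range of the camera
// and the camera sees its face at an angle below the configured limit.
bool isDetectionCandidate(
  const Vec3 & camera_position, const Vec3 & light_position, const Vec3 & light_facing_unit,
  double max_detection_range_m, double max_angle_range_deg)
{
  const Vec3 to_camera{
    camera_position.x - light_position.x, camera_position.y - light_position.y,
    camera_position.z - light_position.z};
  const double distance = norm(to_camera);
  if (distance > max_detection_range_m) {
    return false;
  }
  const double cos_angle = dot(light_facing_unit, to_camera) / distance;
  const double angle_deg = std::acos(cos_angle) * 180.0 / 3.14159265358979323846;
  return angle_deg < max_angle_range_deg;
}
```
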
## Input topics @@ -37,6 +37,8 @@ If the node receives no route information, it looks at a radius of 200 meters an | `max_vibration_width` | double | Maximum error in width direction. If -5~+5, it will be 10. | | `max_vibration_depth` | double | Maximum error in depth direction. If -5~+5, it will be 10. | | `max_detection_range` | double | Maximum detection range in meters. Must be positive | +| `car_traffic_light_max_angle_range` | double | Maximum angle between the vehicular traffic light and the camera in degrees. Must be positive +| `pedestrian_traffic_light_max_angle_range` | double | Maximum angle between the pedestrian traffic light and the camera in degrees. Must be positive | | `min_timestamp_offset` | double | Minimum timestamp offset when searching for corresponding tf | | `max_timestamp_offset` | double | Maximum timestamp offset when searching for corresponding tf | | `timestamp_sample_len` | double | sampling length between min_timestamp_offset and max_timestamp_offset | diff --git a/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp b/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp index 70841b936af37..a1e06bcfb64ac 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp +++ b/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp @@ -85,6 +85,9 @@ int compareRecord( int visible_score_1 = calVisibleScore(r1); int visible_score_2 = calVisibleScore(r2); if (visible_score_1 == visible_score_2) { + /* + if the visible scores are the same, the one with higher confidence is of higher priority + */ double confidence_1 = r1.signal.elements[0].confidence; double confidence_2 = r2.signal.elements[0].confidence; return confidence_1 < confidence_2 ? -1 : 1; diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md index bc57dbea76c97..ce2a103d4715a 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/README.md +++ b/perception/autoware_traffic_light_occlusion_predictor/README.md @@ -8,22 +8,24 @@ For each traffic light roi, hundreds of pixels would be selected and projected i ![image](images/occlusion.png) -If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. +If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. The signal whose occlusion ratio is larger than max_occlusion_ratio will be set as unknown type. 
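As a hedged sketch of that rule (illustrative names only; the real node projects LiDAR points through each ROI to decide which pixels are blocked), the gating is simply:

```cpp
// Hedged sketch of the occlusion gating described above. Assumption: an
// upstream step has already marked, for each sampled pixel of a ROI, whether
// its ray is blocked by a LiDAR point in front of the traffic light. The
// parameter table lists max_occlusion_ratio as an int, so it is treated here
// as a percentage. All names are illustrative.
#include <cstddef>
#include <vector>

int occlusionRatioPercent(const std::vector<bool> & pixel_blocked)
{
  if (pixel_blocked.empty()) {
    return 0;  // mirrors the fallback when no usable point cloud is available
  }
  std::size_t blocked = 0;
  for (const bool b : pixel_blocked) {
    blocked += b ? 1 : 0;
  }
  return static_cast<int>(100 * blocked / pixel_blocked.size());
}

bool shouldOverwriteAsUnknown(int occlusion_ratio_percent, int max_occlusion_ratio)
{
  return occlusion_ratio_percent > max_occlusion_ratio;
}
```
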
## Input topics -| Name | Type | Description | -| -------------------- | ---------------------------------------------- | ------------------------ | -| `~input/vector_map` | autoware_map_msgs::msg::LaneletMapBin | vector map | -| `~/input/rois` | autoware_perception_msgs::TrafficLightRoiArray | traffic light detections | -| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | -| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | +| Name | Type | Description | +| -------------------- | --------------------------------------------------- | ------------------------ | +| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | +| `~/input/car/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | +| `~/input/pedestrian/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | +| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | +| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | +| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | ## Output topics -| Name | Type | Description | -| -------------------- | ---------------------------------------------------- | ---------------------------- | -| `~/output/occlusion` | autoware_perception_msgs::TrafficLightOcclusionArray | occlusion ratios of each roi | +| Name | Type | Description | +| -------------------- | --------------------------------------------------------- | ---------------------------- | +| `~/output/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | ## Node parameters @@ -34,3 +36,4 @@ If no point cloud is received or all point clouds have very large stamp differen | `max_valid_pt_dist` | double | The points within this distance would be used for calculation | | `max_image_cloud_delay` | double | The maximum delay between LiDAR point cloud and camera image | | `max_wait_t` | double | The maximum time waiting for the LiDAR point cloud | +| `max_occlusion_ratio` | int | The maximum occlusion ratio for setting signal as unknown | From cd3c173e3a31c840239aa318320fd12403232bfb Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Tue, 2 Apr 2024 11:48:20 +0900 Subject: [PATCH 02/37] fix: precommit Signed-off-by: tzhong518 --- .../README.md | 11 +++++--- .../README.md | 24 ++++++++--------- .../README.md | 26 +++++++++---------- .../README.md | 22 ++++++++-------- 4 files changed, 44 insertions(+), 39 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 9d28d1f84a17e..8dafabea7e844 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -25,13 +25,16 @@ | Name | Type | Description | Default value | | :---------------------------- | :------- | 
:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | -| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedetrian traffic light color. +| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedetrian traffic light color. | ## Inner-workings / Algorithms + 1. Estimate the color of pedestrian traffic light from HDMap and detected vehicle traffic signals. 2. If pedestrian traffic light recognition is available, determine the final state based on classification result and estimation result. + ### Estimation + ```plantuml start @@ -83,13 +86,14 @@ If traffic between pedestrians and vehicles is controlled by traffic signals, th ### Final state + ```plantumul start if (the pedestrian traffic light classification result exists)then : update the flashing flag according to the classification result(in_signal) and last_signals if (the traffic light is flashing?)then(yes) : update the traffic light state - else(no) + else(no) : the traffic light state is the same with the classification result if (the classification result not exists) : the traffic light state is the same with the estimation @@ -98,6 +102,7 @@ end ``` #### Update flashing flag +
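The image under this heading (added as `images/flashing_state.png` later in this series) illustrates the judgement. As a hedged sketch of one way the flag could be derived from the classification history kept for `last_colors_hold_time` (hypothetical types and logic, not the node's actual code):

```cpp
// Hedged sketch: a flashing GREEN typically appears in the classifier output
// as GREEN results alternating with UNKNOWN, so a GREEN -> UNKNOWN -> GREEN
// pattern within last_colors_hold_time is treated as flashing.
// Illustrative assumption only.
#include <deque>

enum class PedestrianColor { GREEN, RED, UNKNOWN };

struct TimedResult
{
  double stamp_sec;
  PedestrianColor color;
};

bool isFlashing(
  const std::deque<TimedResult> & history,  // ordered oldest first
  double now_sec, double last_colors_hold_time)
{
  bool seen_green = false;
  bool unknown_after_green = false;
  for (const auto & r : history) {
    if (now_sec - r.stamp_sec > last_colors_hold_time) {
      continue;  // outside the hold window
    }
    if (r.color == PedestrianColor::GREEN) {
      if (seen_green && unknown_after_green) {
        return true;  // GREEN -> UNKNOWN -> GREEN observed
      }
      seen_green = true;
      unknown_after_green = false;
    } else if (r.color == PedestrianColor::UNKNOWN && seen_green) {
      unknown_after_green = true;
    }
  }
  return false;
}
```
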
diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 3782c5944245e..2315ac17364a3 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -12,17 +12,17 @@ Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. We trained classifiers for vehicular signals and pedestrian signals separately. For vehicular signals, totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: -| Name | Input Size | Test Accuracy | +| Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | -| EfficientNet-b1 | 128 x 128 | 99.76% | -| MobileNet-v2 | 224 x 224 | 99.81% | +| EfficientNet-b1 | 128 x 128 | 99.76% | +| MobileNet-v2 | 224 x 224 | 99.81% | For pedestrian signals, totally 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: -| Name | Input Size | Test Accuracy | +| Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | -| EfficientNet-b1 | 128 x 128 | 97.89% | -| MobileNet-v2 | 224 x 224 | 99.10% | +| EfficientNet-b1 | 128 x 128 | 97.89% | +| MobileNet-v2 | 224 x 224 | 99.10% | ### hsv_classifier @@ -59,12 +59,12 @@ These colors and shapes are assigned to the message as follows: ### Node Parameters -| Name | Type | Description | -| --------------------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `classifier_type` | int | if the value is `1`, cnn_classifier is used | -| `data_path` | str | packages data and artifacts directory path | -| `backlight_threshold` | float | If the intensity get grater than this overwrite with UNKNOWN in corresponding RoI. Note that, if the value is much higher, the node only overwrites in the harsher backlight situations. Therefore, If you wouldn't like to use this feature set this value to `1.0`. The value can be `[0.0, 1.0]`. The confidence of overwritten signal is set to `0.0`. | -| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. | +| Name | Type | Description | +| ----------------------------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `classifier_type` | int | if the value is `1`, cnn_classifier is used | +| `data_path` | str | packages data and artifacts directory path | +| `backlight_threshold` | float | If the intensity get grater than this overwrite with UNKNOWN in corresponding RoI. Note that, if the value is much higher, the node only overwrites in the harsher backlight situations. 
Therefore, If you wouldn't like to use this feature set this value to `1.0`. The value can be `[0.0, 1.0]`. The confidence of overwritten signal is set to `0.0`. | +| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. | ### Core Parameters diff --git a/perception/autoware_traffic_light_map_based_detector/README.md b/perception/autoware_traffic_light_map_based_detector/README.md index b6ba14df262be..6a8e7bb476f03 100644 --- a/perception/autoware_traffic_light_map_based_detector/README.md +++ b/perception/autoware_traffic_light_map_based_detector/README.md @@ -29,16 +29,16 @@ If the node receives no route information, it looks at a radius of max_detection ## Node parameters -| Parameter | Type | Description | -| ---------------------- | ------ | --------------------------------------------------------------------- | -| `max_vibration_pitch` | double | Maximum error in pitch direction. If -5~+5, it will be 10. | -| `max_vibration_yaw` | double | Maximum error in yaw direction. If -5~+5, it will be 10. | -| `max_vibration_height` | double | Maximum error in height direction. If -5~+5, it will be 10. | -| `max_vibration_width` | double | Maximum error in width direction. If -5~+5, it will be 10. | -| `max_vibration_depth` | double | Maximum error in depth direction. If -5~+5, it will be 10. | -| `max_detection_range` | double | Maximum detection range in meters. Must be positive | -| `car_traffic_light_max_angle_range` | double | Maximum angle between the vehicular traffic light and the camera in degrees. Must be positive -| `pedestrian_traffic_light_max_angle_range` | double | Maximum angle between the pedestrian traffic light and the camera in degrees. Must be positive | -| `min_timestamp_offset` | double | Minimum timestamp offset when searching for corresponding tf | -| `max_timestamp_offset` | double | Maximum timestamp offset when searching for corresponding tf | -| `timestamp_sample_len` | double | sampling length between min_timestamp_offset and max_timestamp_offset | +| Parameter | Type | Description | +| ------------------------------------------ | ------ | ---------------------------------------------------------------------------------------------- | +| `max_vibration_pitch` | double | Maximum error in pitch direction. If -5~+5, it will be 10. | +| `max_vibration_yaw` | double | Maximum error in yaw direction. If -5~+5, it will be 10. | +| `max_vibration_height` | double | Maximum error in height direction. If -5~+5, it will be 10. | +| `max_vibration_width` | double | Maximum error in width direction. If -5~+5, it will be 10. | +| `max_vibration_depth` | double | Maximum error in depth direction. If -5~+5, it will be 10. | +| `max_detection_range` | double | Maximum detection range in meters. Must be positive | +| `car_traffic_light_max_angle_range` | double | Maximum angle between the vehicular traffic light and the camera in degrees. Must be positive | +| `pedestrian_traffic_light_max_angle_range` | double | Maximum angle between the pedestrian traffic light and the camera in degrees. 
Must be positive | +| `min_timestamp_offset` | double | Minimum timestamp offset when searching for corresponding tf | +| `max_timestamp_offset` | double | Maximum timestamp offset when searching for corresponding tf | +| `timestamp_sample_len` | double | sampling length between min_timestamp_offset and max_timestamp_offset | diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md index ce2a103d4715a..0a51de3dc2e3a 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/README.md +++ b/perception/autoware_traffic_light_occlusion_predictor/README.md @@ -12,19 +12,19 @@ If no point cloud is received or all point clouds have very large stamp differen ## Input topics -| Name | Type | Description | -| -------------------- | --------------------------------------------------- | ------------------------ | -| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | -| `~/input/car/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | -| `~/input/pedestrian/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | -| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | -| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | -| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | +| Name | Type | Description | +| ------------------------------------ | --------------------------------------------------- | -------------------------------- | +| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | +| `~/input/car/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | +| `~/input/pedestrian/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | +| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | +| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | +| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | ## Output topics -| Name | Type | Description | -| -------------------- | --------------------------------------------------------- | ---------------------------- | +| Name | Type | Description | +| -------------------------- | --------------------------------------------- | ------------------------------------------------------------ | | `~/output/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | ## Node parameters @@ -36,4 +36,4 @@ If no point cloud is received or all point clouds have very large stamp differen | `max_valid_pt_dist` | double | The points within this distance would be used for calculation | | `max_image_cloud_delay` | double | The maximum delay between LiDAR point cloud and camera image | | `max_wait_t` | double | The maximum time waiting for the LiDAR point cloud | -| `max_occlusion_ratio` | int | The maximum occlusion ratio for setting signal as unknown | +| `max_occlusion_ratio` | int | The maximum occlusion ratio for setting signal as unknown | From 5c3fee0f13ce4cbaa9d6a3e615995d60b2337d86 Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Tue, 2 Apr 2024 12:01:53 +0900 Subject: [PATCH 03/37] fix: spell check Signed-off-by: tzhong518 --- perception/autoware_crosswalk_traffic_light_estimator/README.md | 2 +- 1 file changed, 1 
insertion(+), 1 deletion(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 8dafabea7e844..9bc071b26f074 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -26,7 +26,7 @@ | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | | `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedetrian traffic light color. | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. 
|

## Inner-workings / Algorithms

From dd479a6d5b74591ab7ce4b04747a611417c7ad42 Mon Sep 17 00:00:00 2001
From: tzhong518
Date: Wed, 3 Apr 2024 14:51:54 +0900
Subject: [PATCH 04/37] fix: add image

Signed-off-by: tzhong518
---
 .../images/flashing_state.png                 | Bin 0 -> 24574 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 perception/autoware_crosswalk_traffic_light_estimator/images/flashing_state.png

diff --git a/perception/autoware_crosswalk_traffic_light_estimator/images/flashing_state.png b/perception/autoware_crosswalk_traffic_light_estimator/images/flashing_state.png
new file mode 100644
index 0000000000000000000000000000000000000000..7686f3842e75c92cfd27f9de08ef1ecf6eb3651f
GIT binary patch
literal 24574
[binary PNG data for images/flashing_state.png omitted]

From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 2 Apr 2024 03:04:02 +0000
Subject: [PATCH 05/37] style(pre-commit): autofix
---
 perception/autoware_crosswalk_traffic_light_estimator/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md
index 9bc071b26f074..514aeaf7813bc 100644
--- a/perception/autoware_crosswalk_traffic_light_estimator/README.md
+++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md
@@ -26,7 +26,7 @@
| :---------------------------- | :------- |
:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | | `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. | ## Inner-workings / Algorithms From 1a399999f8fbac9e181c40333c9d990177c2222a Mon Sep 17 00:00:00 2001 From: Tao Zhong <55872497+tzhong518@users.noreply.github.com> Date: Wed, 3 Apr 2024 12:06:22 +0900 Subject: [PATCH 06/37] Update perception/traffic_light_map_based_detector/README.md Co-authored-by: Dmitrii Koldaev <39071246+dkoldaev@users.noreply.github.com> --- perception/autoware_traffic_light_map_based_detector/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/perception/autoware_traffic_light_map_based_detector/README.md b/perception/autoware_traffic_light_map_based_detector/README.md index 6a8e7bb476f03..a7baf9538eaa7 100644 --- a/perception/autoware_traffic_light_map_based_detector/README.md +++ b/perception/autoware_traffic_light_map_based_detector/README.md @@ -9,7 +9,7 @@ Calibration and vibration errors can be entered as parameters, and the size of t ![traffic_light_map_based_detector_result](./docs/traffic_light_map_based_detector_result.svg) If the node receives route information, it only looks at traffic lights on that route. -If the node receives no route information, it looks at a radius of max_detection_range and the angle between the traffic light and the camera is less than traffic_light_max_angle_range. +If the node receives no route information, it looks at a radius of `max_detection_range` and the angle between the traffic light and the camera is less than `traffic_light_max_angle_range`. 
## Input topics From 0163497b7286691cdbc3ae01e878a8ea562721ab Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Wed, 3 Apr 2024 14:58:30 +0900 Subject: [PATCH 07/37] fix: image size Signed-off-by: tzhong518 --- perception/autoware_crosswalk_traffic_light_estimator/README.md | 2 +- perception/autoware_traffic_light_occlusion_predictor/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 514aeaf7813bc..0b4e2e818e9b7 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -104,7 +104,7 @@ end #### Update flashing flag
- +
## Assumptions / Known limits

diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md
index 0a51de3dc2e3a..ab985c7b17c15 100644
--- a/perception/autoware_traffic_light_occlusion_predictor/README.md
+++ b/perception/autoware_traffic_light_occlusion_predictor/README.md
@@ -8,7 +8,7 @@ For each traffic light roi, hundreds of pixels would be selected and projected i

 ![image](images/occlusion.png)

-If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0.
+If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. The signal whose occlusion ratio is larger than `max_occlusion_ratio `will be set as unknown type.

From 1597e17f77ed0589fcbffef6fb9cde9ca6dee9b9 Mon Sep 17 00:00:00 2001
From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com>
Date: Tue, 3 Sep 2024 04:01:36 +0000
Subject: [PATCH 08/37] style(pre-commit): autofix
---
 perception/autoware_traffic_light_occlusion_predictor/README.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md
index ab985c7b17c15..264ed209b81ca 100644
--- a/perception/autoware_traffic_light_occlusion_predictor/README.md
+++ b/perception/autoware_traffic_light_occlusion_predictor/README.md
@@ -8,7 +8,7 @@ For each traffic light roi, hundreds of pixels would be selected and projected i

 ![image](images/occlusion.png)

-If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. The signal whose occlusion ratio is larger than `max_occlusion_ratio `will be set as unknown type.
+If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. The signal whose occlusion ratio is larger than `max_occlusion_ratio` will be set as unknown type.
## Input topics From 15c8cba2765cdaafa37d8da3f43e1ceae075786c Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Tue, 3 Sep 2024 14:02:03 +0900 Subject: [PATCH 09/37] fix: topic name Signed-off-by: tzhong518 --- .../README.md | 4 ++-- .../crosswalk_traffic_light_estimator.launch.xml | 4 ++-- .../src/node.cpp | 4 ++-- .../map_based_prediction_node.hpp | 2 +- .../launch/map_based_prediction.launch.xml | 4 ++-- perception/autoware_traffic_light_arbiter/README.md | 6 +++--- .../launch/traffic_light_arbiter.launch.xml | 12 ++++++------ .../src/traffic_light_arbiter.cpp | 6 +++--- .../autoware_traffic_light_classifier/README.md | 2 +- .../launch/traffic_light_classifier.launch.xml | 4 ++-- .../src/traffic_light_classifier_node.cpp | 2 +- .../README.md | 4 ++-- .../traffic_light_multi_camera_fusion.launch.xml | 4 ++-- .../src/traffic_light_multi_camera_fusion_node.cpp | 4 ++-- .../README.md | 6 +++--- .../traffic_light_occlusion_predictor.launch.xml | 12 ++++++------ .../src/node.cpp | 6 +++--- .../launch/traffic_light_map_visualizer.launch.xml | 4 ++-- .../launch/traffic_light_roi_visualizer.launch.xml | 4 ++-- .../src/traffic_light_roi_visualizer/node.cpp | 2 +- 20 files changed, 48 insertions(+), 48 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 0b4e2e818e9b7..8c81ed68efa7b 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -12,13 +12,13 @@ | ------------------------------------ | ------------------------------------------------ | ------------------ | | `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | | `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route | -| `~/input/classified/traffic_signals` | `tier4_perception_msgs::msg::TrafficSignalArray` | classified signals | +| `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | ### Output | Name | Type | Description | | -------------------------- | ------------------------------------------------------- | --------------------------------------------------------- | -| `~/output/traffic_signals` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | output that contains estimated pedestrian traffic signals | +| `~/output/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | output that contains estimated pedestrian traffic signals | ## Parameters diff --git a/perception/autoware_crosswalk_traffic_light_estimator/launch/crosswalk_traffic_light_estimator.launch.xml b/perception/autoware_crosswalk_traffic_light_estimator/launch/crosswalk_traffic_light_estimator.launch.xml index 2e1437ecd7d93..9fb9c1346339f 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/launch/crosswalk_traffic_light_estimator.launch.xml +++ b/perception/autoware_crosswalk_traffic_light_estimator/launch/crosswalk_traffic_light_estimator.launch.xml @@ -16,8 +16,8 @@ - - + + diff --git a/perception/autoware_crosswalk_traffic_light_estimator/src/node.cpp b/perception/autoware_crosswalk_traffic_light_estimator/src/node.cpp index 5d9f06c3432b5..13f31d92b99a7 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/src/node.cpp +++ b/perception/autoware_crosswalk_traffic_light_estimator/src/node.cpp @@ -92,11 +92,11 @@ CrosswalkTrafficLightEstimatorNode::CrosswalkTrafficLightEstimatorNode( "~/input/route", 
rclcpp::QoS{1}.transient_local(), std::bind(&CrosswalkTrafficLightEstimatorNode::onRoute, this, _1)); sub_traffic_light_array_ = create_subscription( - "~/input/classified/traffic_signals", rclcpp::QoS{1}, + "~/input/classified/traffic_lights", rclcpp::QoS{1}, std::bind(&CrosswalkTrafficLightEstimatorNode::onTrafficLightArray, this, _1)); pub_traffic_light_array_ = - this->create_publisher("~/output/traffic_signals", rclcpp::QoS{1}); + this->create_publisher("~/output/traffic_lights", rclcpp::QoS{1}); pub_processing_time_ = std::make_shared(this, "~/debug"); } diff --git a/perception/autoware_map_based_prediction/include/map_based_prediction/map_based_prediction_node.hpp b/perception/autoware_map_based_prediction/include/map_based_prediction/map_based_prediction_node.hpp index 3078fe89444b8..8408129f13755 100644 --- a/perception/autoware_map_based_prediction/include/map_based_prediction/map_based_prediction_node.hpp +++ b/perception/autoware_map_based_prediction/include/map_based_prediction/map_based_prediction_node.hpp @@ -169,7 +169,7 @@ class MapBasedPredictionNode : public rclcpp::Node rclcpp::Subscription::SharedPtr sub_objects_; rclcpp::Subscription::SharedPtr sub_map_; autoware::universe_utils::InterProcessPollingSubscriber - sub_traffic_signals_{this, "/traffic_signals"}; + sub_traffic_signals_{this, "/traffic_lights"}; // debug publisher std::unique_ptr> stop_watch_ptr_; diff --git a/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml b/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml index 2c668639c2a56..915dc53002359 100644 --- a/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml +++ b/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml @@ -3,14 +3,14 @@ - + - + diff --git a/perception/autoware_traffic_light_arbiter/README.md b/perception/autoware_traffic_light_arbiter/README.md index 619154e1e183b..4260b50bfe9ec 100644 --- a/perception/autoware_traffic_light_arbiter/README.md +++ b/perception/autoware_traffic_light_arbiter/README.md @@ -28,14 +28,14 @@ The table below outlines how the matching process determines the output based on | Name | Type | Description | | -------------------------------- | ----------------------------------------------------- | -------------------------------------------------------- | | ~/sub/vector_map | autoware_map_msgs::msg::LaneletMapBin | The vector map to get valid traffic signal ids. | -| ~/sub/perception_traffic_signals | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from the image recognition pipeline. | -| ~/sub/external_traffic_signals | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from an external system. | +| ~/sub/perception_traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from the image recognition pipeline. | +| ~/sub/external_traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from an external system. | #### Output | Name | Type | Description | | --------------------- | ----------------------------------------------------- | -------------------------------- | -| ~/pub/traffic_signals | autoware_perception_msgs::msg::TrafficLightGroupArray | The merged traffic signal state. | +| ~/pub/traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The merged traffic signal state. 
| ## Parameters diff --git a/perception/autoware_traffic_light_arbiter/launch/traffic_light_arbiter.launch.xml b/perception/autoware_traffic_light_arbiter/launch/traffic_light_arbiter.launch.xml index 8e2b9e8cf02d3..3ec9c13a4d1e5 100644 --- a/perception/autoware_traffic_light_arbiter/launch/traffic_light_arbiter.launch.xml +++ b/perception/autoware_traffic_light_arbiter/launch/traffic_light_arbiter.launch.xml @@ -1,16 +1,16 @@ - - - + + + - - - + + + diff --git a/perception/autoware_traffic_light_arbiter/src/traffic_light_arbiter.cpp b/perception/autoware_traffic_light_arbiter/src/traffic_light_arbiter.cpp index e71629fa5dd28..9898840089fea 100644 --- a/perception/autoware_traffic_light_arbiter/src/traffic_light_arbiter.cpp +++ b/perception/autoware_traffic_light_arbiter/src/traffic_light_arbiter.cpp @@ -85,14 +85,14 @@ TrafficLightArbiter::TrafficLightArbiter(const rclcpp::NodeOptions & options) std::bind(&TrafficLightArbiter::onMap, this, std::placeholders::_1)); perception_tlr_sub_ = create_subscription( - "~/sub/perception_traffic_signals", rclcpp::QoS(1), + "~/sub/perception_traffic_lights", rclcpp::QoS(1), std::bind(&TrafficLightArbiter::onPerceptionMsg, this, std::placeholders::_1)); external_tlr_sub_ = create_subscription( - "~/sub/external_traffic_signals", rclcpp::QoS(1), + "~/sub/external_traffic_lights", rclcpp::QoS(1), std::bind(&TrafficLightArbiter::onExternalMsg, this, std::placeholders::_1)); - pub_ = create_publisher("~/pub/traffic_signals", rclcpp::QoS(1)); + pub_ = create_publisher("~/pub/traffic_lights", rclcpp::QoS(1)); } void TrafficLightArbiter::onMap(const LaneletMapBin::ConstSharedPtr msg) diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 2315ac17364a3..85b1c331fa590 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -52,7 +52,7 @@ These colors and shapes are assigned to the message as follows: | Name | Type | Description | | -------------------------- | ----------------------------------------------- | ------------------- | -| `~/output/traffic_signals` | `tier4_perception_msgs::msg::TrafficLightArray` | classified signals | +| `~/output/traffic_lights` | `tier4_perception_msgs::msg::TrafficLightArray` | classified signals | | `~/output/debug/image` | `sensor_msgs::msg::Image` | image for debugging | ## Parameters diff --git a/perception/autoware_traffic_light_classifier/launch/traffic_light_classifier.launch.xml b/perception/autoware_traffic_light_classifier/launch/traffic_light_classifier.launch.xml index d0cbbd3dcae9b..42ffff49848a1 100644 --- a/perception/autoware_traffic_light_classifier/launch/traffic_light_classifier.launch.xml +++ b/perception/autoware_traffic_light_classifier/launch/traffic_light_classifier.launch.xml @@ -1,14 +1,14 @@ - + - + diff --git a/perception/autoware_traffic_light_classifier/src/traffic_light_classifier_node.cpp b/perception/autoware_traffic_light_classifier/src/traffic_light_classifier_node.cpp index 796a144bf8266..cca9c810fdc6b 100644 --- a/perception/autoware_traffic_light_classifier/src/traffic_light_classifier_node.cpp +++ b/perception/autoware_traffic_light_classifier/src/traffic_light_classifier_node.cpp @@ -43,7 +43,7 @@ TrafficLightClassifierNodelet::TrafficLightClassifierNodelet(const rclcpp::NodeO } traffic_signal_array_pub_ = this->create_publisher( - "~/output/traffic_signals", rclcpp::QoS{1}); + "~/output/traffic_lights", rclcpp::QoS{1}); using 
std::chrono_literals::operator""ms; timer_ = rclcpp::create_timer( diff --git a/perception/autoware_traffic_light_multi_camera_fusion/README.md b/perception/autoware_traffic_light_multi_camera_fusion/README.md index f7ee294cda147..54df5e703cb76 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/README.md +++ b/perception/autoware_traffic_light_multi_camera_fusion/README.md @@ -15,7 +15,7 @@ For every camera, the following three topics are subscribed: | -------------------------------------- | ---------------------------------------------- | --------------------------------------------------- | | `~//camera_info` | sensor_msgs::CameraInfo | camera info from traffic_light_map_based_detector | | `~//rois` | tier4_perception_msgs::TrafficLightRoiArray | detection roi from traffic_light_fine_detector | -| `~//traffic_signals` | tier4_perception_msgs::TrafficLightSignalArray | classification result from traffic_light_classifier | +| `~//traffic_lights` | tier4_perception_msgs::TrafficLightSignalArray | classification result from traffic_light_classifier | You don't need to configure these topics manually. Just provide the `camera_namespaces` parameter and the node will automatically extract the `` and create the subscribers. @@ -23,7 +23,7 @@ You don't need to configure these topics manually. Just provide the `camera_name | Name | Type | Description | | -------------------------- | ------------------------------------------------- | ---------------------------------- | -| `~/output/traffic_signals` | autoware_perception_msgs::TrafficLightSignalArray | traffic light signal fusion result | +| `~/output/traffic_lights` | autoware_perception_msgs::TrafficLightSignalArray | traffic light signal fusion result | ## Node parameters diff --git a/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml b/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml index 32e3417cf9029..5d79373991013 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml +++ b/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml @@ -2,11 +2,11 @@ - + - + diff --git a/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp b/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp index a1e06bcfb64ac..f7582045cf1b2 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp +++ b/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp @@ -154,7 +154,7 @@ MultiCameraFusion::MultiCameraFusion(const rclcpp::NodeOptions & node_options) is_approximate_sync_ = this->declare_parameter("approximate_sync", false); message_lifespan_ = this->declare_parameter("message_lifespan", 0.09); for (const std::string & camera_ns : camera_namespaces) { - std::string signal_topic = camera_ns + "/classification/traffic_signals"; + std::string signal_topic = camera_ns + "/classification/traffic_lights"; std::string roi_topic = camera_ns + "/detection/rois"; std::string cam_info_topic = camera_ns + "/camera_info"; roi_subs_.emplace_back( @@ -181,7 +181,7 @@ MultiCameraFusion::MultiCameraFusion(const rclcpp::NodeOptions & node_options) map_sub_ = create_subscription( "~/input/vector_map", rclcpp::QoS{1}.transient_local(), 
std::bind(&MultiCameraFusion::mapCallback, this, _1)); - signal_pub_ = create_publisher("~/output/traffic_signals", rclcpp::QoS{1}); + signal_pub_ = create_publisher("~/output/traffic_lights", rclcpp::QoS{1}); } void MultiCameraFusion::trafficSignalRoiCallback( diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md index ab985c7b17c15..dcb896fca819f 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/README.md +++ b/perception/autoware_traffic_light_occlusion_predictor/README.md @@ -15,8 +15,8 @@ If no point cloud is received or all point clouds have very large stamp differen | Name | Type | Description | | ------------------------------------ | --------------------------------------------------- | -------------------------------- | | `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | -| `~/input/car/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | -| `~/input/pedestrian/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | +| `~/input/car/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | +| `~/input/pedestrian/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | | `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | | `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | | `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | @@ -25,7 +25,7 @@ If no point cloud is received or all point clouds have very large stamp differen | Name | Type | Description | | -------------------------- | --------------------------------------------- | ------------------------------------------------------------ | -| `~/output/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | +| `~/output/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | ## Node parameters diff --git a/perception/autoware_traffic_light_occlusion_predictor/launch/traffic_light_occlusion_predictor.launch.xml b/perception/autoware_traffic_light_occlusion_predictor/launch/traffic_light_occlusion_predictor.launch.xml index d59d5a7717297..d9fae26b7fd55 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/launch/traffic_light_occlusion_predictor.launch.xml +++ b/perception/autoware_traffic_light_occlusion_predictor/launch/traffic_light_occlusion_predictor.launch.xml @@ -4,9 +4,9 @@ - - - + + + @@ -14,9 +14,9 @@ - - - + + + diff --git a/perception/autoware_traffic_light_occlusion_predictor/src/node.cpp b/perception/autoware_traffic_light_occlusion_predictor/src/node.cpp index 8bc11fdea2aad..f6fa0223665dd 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/src/node.cpp +++ b/perception/autoware_traffic_light_occlusion_predictor/src/node.cpp @@ -58,7 +58,7 @@ TrafficLightOcclusionPredictorNode::TrafficLightOcclusionPredictorNode( // publishers signal_pub_ = - create_publisher("~/output/traffic_signals", 1); + create_publisher("~/output/traffic_lights", 1); // configuration parameters config_.azimuth_occlusion_resolution_deg = @@ -75,7 +75,7 @@ TrafficLightOcclusionPredictorNode::TrafficLightOcclusionPredictorNode( config_.elevation_occlusion_resolution_deg); const std::vector 
topics{ - "~/input/car/traffic_signals", "~/input/rois", "~/input/camera_info", "~/input/cloud"}; + "~/input/car/traffic_lights", "~/input/rois", "~/input/camera_info", "~/input/cloud"}; const std::vector qos(topics.size(), rclcpp::SensorDataQoS()); synchronizer_ = std::make_shared( this, topics, qos, @@ -85,7 +85,7 @@ TrafficLightOcclusionPredictorNode::TrafficLightOcclusionPredictorNode( config_.max_image_cloud_delay, config_.max_wait_t); const std::vector topics_ped{ - "~/input/pedestrian/traffic_signals", "~/input/rois", "~/input/camera_info", "~/input/cloud"}; + "~/input/pedestrian/traffic_lights", "~/input/rois", "~/input/camera_info", "~/input/cloud"}; const std::vector qos_ped(topics_ped.size(), rclcpp::SensorDataQoS()); synchronizer_ped_ = std::make_shared( this, topics_ped, qos_ped, diff --git a/perception/autoware_traffic_light_visualization/launch/traffic_light_map_visualizer.launch.xml b/perception/autoware_traffic_light_visualization/launch/traffic_light_map_visualizer.launch.xml index 8ff56915766aa..1c580cd7ecbdb 100644 --- a/perception/autoware_traffic_light_visualization/launch/traffic_light_map_visualizer.launch.xml +++ b/perception/autoware_traffic_light_visualization/launch/traffic_light_map_visualizer.launch.xml @@ -1,7 +1,7 @@ - + - + diff --git a/perception/autoware_traffic_light_visualization/launch/traffic_light_roi_visualizer.launch.xml b/perception/autoware_traffic_light_visualization/launch/traffic_light_roi_visualizer.launch.xml index d4af7a27636df..be61276d58d7b 100644 --- a/perception/autoware_traffic_light_visualization/launch/traffic_light_roi_visualizer.launch.xml +++ b/perception/autoware_traffic_light_visualization/launch/traffic_light_roi_visualizer.launch.xml @@ -2,7 +2,7 @@ - + @@ -11,7 +11,7 @@ - + diff --git a/perception/autoware_traffic_light_visualization/src/traffic_light_roi_visualizer/node.cpp b/perception/autoware_traffic_light_visualization/src/traffic_light_roi_visualizer/node.cpp index 7ef13cf457c07..891011b8cac7a 100644 --- a/perception/autoware_traffic_light_visualization/src/traffic_light_roi_visualizer/node.cpp +++ b/perception/autoware_traffic_light_visualization/src/traffic_light_roi_visualizer/node.cpp @@ -76,7 +76,7 @@ void TrafficLightRoiVisualizerNode::connectCb() image_sub_.subscribe(this, "~/input/image", "raw", rmw_qos_profile_sensor_data); roi_sub_.subscribe(this, "~/input/rois", rclcpp::QoS{1}.get_rmw_qos_profile()); traffic_signals_sub_.subscribe( - this, "~/input/traffic_signals", rclcpp::QoS{1}.get_rmw_qos_profile()); + this, "~/input/traffic_lights", rclcpp::QoS{1}.get_rmw_qos_profile()); if (enable_fine_detection_) { rough_roi_sub_.subscribe(this, "~/input/rough/rois", rclcpp::QoS{1}.get_rmw_qos_profile()); } From 05b338b6999c333c23f6e2ff48cd6742338d4c50 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Tue, 3 Sep 2024 05:06:48 +0000 Subject: [PATCH 10/37] style(pre-commit): autofix --- .../README.md | 12 ++++++------ .../autoware_traffic_light_arbiter/README.md | 10 +++++----- .../autoware_traffic_light_classifier/README.md | 6 +++--- .../README.md | 12 ++++++------ .../README.md | 16 ++++++++-------- 5 files changed, 28 insertions(+), 28 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 8c81ed68efa7b..951bd08bd4dfc 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ 
b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -8,16 +8,16 @@ ### Input -| Name | Type | Description | -| ------------------------------------ | ------------------------------------------------ | ------------------ | -| `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | -| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route | +| Name | Type | Description | +| ----------------------------------- | ------------------------------------------------------- | ------------------ | +| `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | +| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route | | `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | ### Output -| Name | Type | Description | -| -------------------------- | ------------------------------------------------------- | --------------------------------------------------------- | +| Name | Type | Description | +| ------------------------- | ------------------------------------------------------- | --------------------------------------------------------- | | `~/output/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | output that contains estimated pedestrian traffic signals | ## Parameters diff --git a/perception/autoware_traffic_light_arbiter/README.md b/perception/autoware_traffic_light_arbiter/README.md index 4260b50bfe9ec..2d185bfc4f9e3 100644 --- a/perception/autoware_traffic_light_arbiter/README.md +++ b/perception/autoware_traffic_light_arbiter/README.md @@ -25,16 +25,16 @@ The table below outlines how the matching process determines the output based on #### Input -| Name | Type | Description | -| -------------------------------- | ----------------------------------------------------- | -------------------------------------------------------- | -| ~/sub/vector_map | autoware_map_msgs::msg::LaneletMapBin | The vector map to get valid traffic signal ids. | +| Name | Type | Description | +| ------------------------------- | ----------------------------------------------------- | -------------------------------------------------------- | +| ~/sub/vector_map | autoware_map_msgs::msg::LaneletMapBin | The vector map to get valid traffic signal ids. | | ~/sub/perception_traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from the image recognition pipeline. | | ~/sub/external_traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from an external system. | #### Output -| Name | Type | Description | -| --------------------- | ----------------------------------------------------- | -------------------------------- | +| Name | Type | Description | +| -------------------- | ----------------------------------------------------- | -------------------------------- | | ~/pub/traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The merged traffic signal state. 
| ## Parameters diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 85b1c331fa590..871d57a4b5313 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -50,10 +50,10 @@ These colors and shapes are assigned to the message as follows: ### Output -| Name | Type | Description | -| -------------------------- | ----------------------------------------------- | ------------------- | +| Name | Type | Description | +| ------------------------- | ----------------------------------------------- | ------------------- | | `~/output/traffic_lights` | `tier4_perception_msgs::msg::TrafficLightArray` | classified signals | -| `~/output/debug/image` | `sensor_msgs::msg::Image` | image for debugging | +| `~/output/debug/image` | `sensor_msgs::msg::Image` | image for debugging | ## Parameters diff --git a/perception/autoware_traffic_light_multi_camera_fusion/README.md b/perception/autoware_traffic_light_multi_camera_fusion/README.md index 54df5e703cb76..6a4f26585a551 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/README.md +++ b/perception/autoware_traffic_light_multi_camera_fusion/README.md @@ -11,18 +11,18 @@ For every camera, the following three topics are subscribed: -| Name | Type | Description | -| -------------------------------------- | ---------------------------------------------- | --------------------------------------------------- | -| `~//camera_info` | sensor_msgs::CameraInfo | camera info from traffic_light_map_based_detector | -| `~//rois` | tier4_perception_msgs::TrafficLightRoiArray | detection roi from traffic_light_fine_detector | +| Name | Type | Description | +| ------------------------------------- | ---------------------------------------------- | --------------------------------------------------- | +| `~//camera_info` | sensor_msgs::CameraInfo | camera info from traffic_light_map_based_detector | +| `~//rois` | tier4_perception_msgs::TrafficLightRoiArray | detection roi from traffic_light_fine_detector | | `~//traffic_lights` | tier4_perception_msgs::TrafficLightSignalArray | classification result from traffic_light_classifier | You don't need to configure these topics manually. Just provide the `camera_namespaces` parameter and the node will automatically extract the `` and create the subscribers. 
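To make the namespace-driven wiring above concrete, here is a rough sketch of how the three per-camera topic names are derived from the `camera_namespaces` parameter, following the `camera_ns + ...` string concatenation visible in the `traffic_light_multi_camera_fusion_node.cpp` hunk earlier in this series (the node extracts each `<camera_ns>` and creates ROS 2 subscribers on these names; the struct and function here are illustrative only):

```cpp
#include <string>
#include <vector>

// Sketch: derive the per-camera topics matching the table above.
struct CameraTopics
{
  std::string camera_info;
  std::string rois;
  std::string traffic_lights;
};

std::vector<CameraTopics> makeCameraTopics(const std::vector<std::string> & camera_namespaces)
{
  std::vector<CameraTopics> topics;
  for (const std::string & camera_ns : camera_namespaces) {
    topics.push_back(
      {camera_ns + "/camera_info", camera_ns + "/detection/rois",
       camera_ns + "/classification/traffic_lights"});
  }
  return topics;
}
```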
## Output topics -| Name | Type | Description | -| -------------------------- | ------------------------------------------------- | ---------------------------------- | +| Name | Type | Description | +| ------------------------- | ------------------------------------------------- | ---------------------------------- | | `~/output/traffic_lights` | autoware_perception_msgs::TrafficLightSignalArray | traffic light signal fusion result | ## Node parameters diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md index b36cb3d78e852..d2473c6acf701 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/README.md +++ b/perception/autoware_traffic_light_occlusion_predictor/README.md @@ -12,19 +12,19 @@ If no point cloud is received or all point clouds have very large stamp differen ## Input topics -| Name | Type | Description | -| ------------------------------------ | --------------------------------------------------- | -------------------------------- | -| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | +| Name | Type | Description | +| ----------------------------------- | --------------------------------------------------- | -------------------------------- | +| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | | `~/input/car/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | | `~/input/pedestrian/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | -| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | -| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | -| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | +| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | +| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | +| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | ## Output topics -| Name | Type | Description | -| -------------------------- | --------------------------------------------- | ------------------------------------------------------------ | +| Name | Type | Description | +| ------------------------- | --------------------------------------------- | ------------------------------------------------------------ | | `~/output/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | ## Node parameters From 3b51931c3d550382564876e8096421e4cc8f2f46 Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Thu, 5 Sep 2024 12:55:06 +0900 Subject: [PATCH 11/37] fix: topic names in launch file Signed-off-by: tzhong518 --- .../traffic_light.launch.xml | 60 +++++++++---------- .../traffic_light_node_container.launch.py | 16 ++--- .../launch/map_based_prediction.launch.xml | 4 +- ...affic_light_multi_camera_fusion.launch.xml | 2 +- 4 files changed, 41 insertions(+), 41 deletions(-) diff --git a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light.launch.xml b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light.launch.xml index 9defa5f9607fa..892b668d48cf5 100644 --- a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light.launch.xml +++ b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light.launch.xml @@ -12,10 +12,10 @@ - - 
- - + + + + @@ -34,9 +34,9 @@ - - - + + + @@ -64,9 +64,9 @@ - - - + + + @@ -76,9 +76,9 @@ - - - + + + @@ -89,9 +89,9 @@ - - - + + + @@ -119,9 +119,9 @@ - - - + + + @@ -131,9 +131,9 @@ - - - + + + @@ -144,16 +144,16 @@ - + - - - + + + @@ -161,16 +161,16 @@ - - + + - - + + diff --git a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py index 74b89e9298da4..0e471a77ca939 100644 --- a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py +++ b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py @@ -63,7 +63,7 @@ def create_parameter_dict(*args): remappings=[ ("~/input/image", LaunchConfiguration("input/image")), ("~/input/rois", LaunchConfiguration("output/rois")), - ("~/output/traffic_signals", "classified/car/traffic_signals"), + ("~/output/traffic_lights", "classified/car/traffic_lights"), ], extra_arguments=[ {"use_intra_process_comms": LaunchConfiguration("use_intra_process")} @@ -78,7 +78,7 @@ def create_parameter_dict(*args): remappings=[ ("~/input/image", LaunchConfiguration("input/image")), ("~/input/rois", LaunchConfiguration("output/rois")), - ("~/output/traffic_signals", "classified/pedestrian/traffic_signals"), + ("~/output/traffic_lights", "classified/pedestrian/traffic_lights"), ], extra_arguments=[ {"use_intra_process_comms": LaunchConfiguration("use_intra_process")} @@ -94,8 +94,8 @@ def create_parameter_dict(*args): ("~/input/rois", LaunchConfiguration("output/rois")), ("~/input/rough/rois", "detection/rough/rois"), ( - "~/input/traffic_signals", - LaunchConfiguration("output/traffic_signals"), + "~/input/traffic_lights", + LaunchConfiguration("output/traffic_lights"), ), ("~/output/image", "debug/rois"), ("~/output/image/compressed", "debug/rois/compressed"), @@ -176,15 +176,15 @@ def add_launch_arg(name: str, default_value=None, description=None): add_launch_arg("input/image", "/sensing/camera/traffic_light/image_raw") add_launch_arg("output/rois", "/perception/traffic_light_recognition/rois") add_launch_arg( - "output/traffic_signals", + "output/traffic_lights", "/perception/traffic_light_recognition/traffic_signals", ) add_launch_arg( - "output/car/traffic_signals", "/perception/traffic_light_recognition/car/traffic_signals" + "output/car/traffic_lights", "/perception/traffic_light_recognition/car/traffic_lights" ) add_launch_arg( - "output/pedestrian/traffic_signals", - "/perception/traffic_light_recognition/pedestrian/traffic_signals", + "output/pedestrian/traffic_lights", + "/perception/traffic_light_recognition/pedestrian/traffic_lights", ) # traffic_light_fine_detector diff --git a/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml b/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml index 915dc53002359..00b839e07355e 100644 --- a/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml +++ b/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml @@ -3,14 +3,14 @@ - + - + diff --git a/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml b/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml index 5d79373991013..430aa0056d9ec 100644 --- 
a/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml +++ b/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml @@ -2,7 +2,7 @@ - + From 1002e45c271b892bc0a4f973223632262fe4baa6 Mon Sep 17 00:00:00 2001 From: Tao Zhong <55872497+tzhong518@users.noreply.github.com> Date: Thu, 5 Sep 2024 13:03:18 +0900 Subject: [PATCH 12/37] Update perception/autoware_traffic_light_classifier/README.md Co-authored-by: Kenzo Lobos Tsunekawa --- perception/autoware_traffic_light_classifier/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 871d57a4b5313..ae2732eb99caf 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -17,7 +17,7 @@ The information of the models is listed here: | EfficientNet-b1 | 128 x 128 | 99.76% | | MobileNet-v2 | 224 x 224 | 99.81% | -For pedestrian signals, totally 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. +For pedestrian signals, a total of 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: | Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | From 3993792e3d82d80cd1a57e40c7cb96edcbf19444 Mon Sep 17 00:00:00 2001 From: Tao Zhong <55872497+tzhong518@users.noreply.github.com> Date: Thu, 5 Sep 2024 13:03:25 +0900 Subject: [PATCH 13/37] Update perception/autoware_traffic_light_classifier/README.md Co-authored-by: Kenzo Lobos Tsunekawa --- perception/autoware_traffic_light_classifier/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index ae2732eb99caf..c9cfd6c5ff002 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -10,7 +10,7 @@ traffic_light_classifier is a package for classifying traffic light labels using Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. We trained classifiers for vehicular signals and pedestrian signals separately. -For vehicular signals, totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. +For vehicular signals, a total of 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. 
The information of the models is listed here: | Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | From e529247b56d761aa95e1c08828fc975a03d2d57b Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Thu, 5 Sep 2024 13:24:25 +0900 Subject: [PATCH 14/37] fix: descriptions Signed-off-by: tzhong518 --- .../autoware_crosswalk_traffic_light_estimator/README.md | 6 +++--- perception/autoware_traffic_light_classifier/README.md | 2 +- .../autoware_traffic_light_map_based_detector/README.md | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 8c81ed68efa7b..eb14de539ee8c 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -11,7 +11,7 @@ | Name | Type | Description | | ------------------------------------ | ------------------------------------------------ | ------------------ | | `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | -| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route | +| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | | `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | ### Output @@ -25,8 +25,8 @@ | Name | Type | Description | Default value | | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | -| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. | +| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. The unit is second. | `2.0` | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. 
|

 ## Inner-workings / Algorithms

diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md
index 85b1c331fa590..d110391d12d4b 100644
--- a/perception/autoware_traffic_light_classifier/README.md
+++ b/perception/autoware_traffic_light_classifier/README.md
@@ -63,7 +63,7 @@ These colors and shapes are assigned to the message as follows:
 | ----------------------------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
 | `classifier_type` | int | if the value is `1`, cnn_classifier is used |
 | `data_path` | str | packages data and artifacts directory path |
-| `backlight_threshold` | float | If the intensity get grater than this overwrite with UNKNOWN in corresponding RoI. Note that, if the value is much higher, the node only overwrites in the harsher backlight situations. Therefore, If you wouldn't like to use this feature set this value to `1.0`. The value can be `[0.0, 1.0]`. The confidence of overwritten signal is set to `0.0`. |
+| `backlight_threshold` | float | If the intensity of light is greater than this threshold, the class of the corresponding RoI will be overwritten with UNKNOWN, and the confidence of the overwritten signal will be set to `0.0`. The value should be set in the range of `[0.0, 1.0]`. If you wouldn't like to use this feature, please set it to `1.0`. |
 | `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. |

 ### Core Parameters

diff --git a/perception/autoware_traffic_light_map_based_detector/README.md b/perception/autoware_traffic_light_map_based_detector/README.md
index a7baf9538eaa7..201fa791b378e 100644
--- a/perception/autoware_traffic_light_map_based_detector/README.md
+++ b/perception/autoware_traffic_light_map_based_detector/README.md
@@ -9,7 +9,7 @@ Calibration and vibration errors can be entered as parameters, and the size of t
 ![traffic_light_map_based_detector_result](./docs/traffic_light_map_based_detector_result.svg)

 If the node receives route information, it only looks at traffic lights on that route.
-If the node receives no route information, it looks at a radius of `max_detection_range` and the angle between the traffic light and the camera is less than `traffic_light_max_angle_range`.
+If the node receives no route information, it looks at traffic lights within a radius of `max_detection_range`. If the angle between the traffic light and the camera is larger than `traffic_light_max_angle_range`, it will be filtered out.
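The rewritten sentence above describes two gates, range and angle. A minimal sketch of that filtering rule, with hypothetical helper names and under the assumption that angles are compared in degrees:

```cpp
#include <cmath>

// Hypothetical gate mirroring the README sentence above: a traffic light is
// a candidate only if it lies within max_detection_range, and it is filtered
// out when the camera-to-light angle exceeds the configured angle range.
bool isCandidateTrafficLight(
  double distance_m, double angle_deg, double max_detection_range, double max_angle_range_deg)
{
  if (distance_m > max_detection_range) {
    return false;  // out of detection range
  }
  return std::abs(angle_deg) <= max_angle_range_deg;  // otherwise filtered out
}
```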
## Input topics From 200573ce766fc67663ffc721632f6520d392a775 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Thu, 5 Sep 2024 04:28:21 +0000 Subject: [PATCH 15/37] style(pre-commit): autofix --- .../README.md | 12 ++++++------ .../autoware_traffic_light_classifier/README.md | 12 ++++++------ 2 files changed, 12 insertions(+), 12 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 1e0a635a3828e..646b2ced4a726 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -8,10 +8,10 @@ ### Input -| Name | Type | Description | -| ------------------------------------ | ------------------------------------------------ | ------------------ | -| `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | -| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | +| Name | Type | Description | +| ----------------------------------- | ------------------------------------------------------- | ------------------ | +| `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | +| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | | `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | ### Output @@ -25,8 +25,8 @@ | Name | Type | Description | Default value | | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | -| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. The unit is second. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. | +| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. The unit is second. | `2.0` | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. 
|

 ## Inner-workings / Algorithms

diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md
index cd0b4ec4a9d21..1e97c89433b41 100644
--- a/perception/autoware_traffic_light_classifier/README.md
+++ b/perception/autoware_traffic_light_classifier/README.md
@@ -59,12 +59,12 @@ These colors and shapes are assigned to the message as follows:

 ### Node Parameters

-| Name | Type | Description |
-| ----------------------------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `classifier_type` | int | if the value is `1`, cnn_classifier is used |
-| `data_path` | str | packages data and artifacts directory path |
-| `backlight_threshold` | float | If the intensity of light is greater than this threshold, the class of the corresponding RoI will be overwritten with UNKNOWN, and the confidence of the overwritten signal will be set to `0.0`. The value should be set in the range of `[0.0, 1.0]`. If you wouldn't like to use this feature, please set it to `1.0`. |
-| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. |
+| Name | Type | Description |
+| ----------------------------- | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ |
+| `classifier_type` | int | if the value is `1`, cnn_classifier is used |
+| `data_path` | str | packages data and artifacts directory path |
+| `backlight_threshold` | float | If the intensity of light is greater than this threshold, the class of the corresponding RoI will be overwritten with UNKNOWN, and the confidence of the overwritten signal will be set to `0.0`. The value should be set in the range of `[0.0, 1.0]`. If you wouldn't like to use this feature, please set it to `1.0`. |
+| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. |

 ### Core Parameters

From 84013dd3c419e1d8b5cd0e79b676f8d869a2e493 Mon Sep 17 00:00:00 2001
From: tzhong518
Date: Tue, 2 Apr 2024 11:37:09 +0900
Subject: [PATCH 16/37] doc: update README for pedestrian traffic light
 recognition

Signed-off-by: tzhong518

---
 .../README.md                                 | 33 ++++++++++++++++---
 .../README.md                                 | 14 ++++++--
 .../README.md                                 |  4 ++-
 ...traffic_light_multi_camera_fusion_node.cpp |  3 ++
 .../README.md                                 | 23 +++++++------
 5 files changed, 58 insertions(+), 19 deletions(-)

diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md
index b14fefbd43beb..9d28d1f84a17e 100644
--- a/perception/autoware_crosswalk_traffic_light_estimator/README.md
+++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md
@@ -2,7 +2,7 @@

 ## Purpose

-`crosswalk_traffic_light_estimator` is a module that estimates pedestrian traffic signals from HDMap and detected vehicle traffic signals.
+`crosswalk_traffic_light_estimator` is a module that estimates pedestrian traffic signals from HDMap and detected traffic signals.

 ## Inputs / Outputs

@@ -25,10 +25,13 @@

 | Name | Type | Description | Default value |
 | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ |
 | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` |
-| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detect color. | `2.0` |
+| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` |
+| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color.

 ## Inner-workings / Algorithms

-
+1. Estimate the color of pedestrian traffic light from HDMap and detected vehicle traffic signals.
+2. If pedestrian traffic light recognition is available, determine the final state based on classification result and estimation result.
+### Estimation

 ```plantuml
 start
@@ -58,7 +61,7 @@ end

 If traffic between pedestrians and vehicles is controlled by traffic signals, the crosswalk traffic signal maybe **RED** in order to prevent pedestrian from crossing when the following conditions are satisfied.

-### Situation1
+#### Situation1

 - crosswalk conflicts **STRAIGHT** lanelet
 - the lanelet refers **GREEN** or **AMBER** traffic signal (The following pictures show only **GREEN** case)

@@ -70,7 +73,7 @@

-### Situation2
+#### Situation2

 - crosswalk conflicts different turn direction lanelets (STRAIGHT and LEFT, LEFT and RIGHT, RIGHT and STRAIGHT)
 - the lanelets refer **GREEN** or **AMBER** traffic signal (The following pictures show only **GREEN** case)

+### Final state
+```plantuml
+start
+if (the pedestrian traffic light classification result exists)then
+ : update the flashing flag according to the classification result (in_signal) and last_signals
+ if (the traffic light is flashing?)then(yes)
+ : update the traffic light state
+ else(no)
+ : the traffic light state is the same as the classification result
+if (the classification result does not exist)
+ : the traffic light state is the same as the estimation
+ : output the current traffic light state
+end
+```
+
+#### Update flashing flag
+
+ +
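As a reading aid for the `Final state` flow added above, the selection logic can be sketched as follows. This is illustrative only; the types and function names are assumptions, not the module's API. A pedestrian classification result takes precedence, with a flashing check applied against the recent history; otherwise the HDMap-based estimation is used as-is:

```cpp
#include <optional>

enum class State { UNKNOWN, RED, GREEN };

// Schematic sketch of the "Final state" decision described above.
State decideFinalState(
  std::optional<State> classification, State estimation, bool is_flashing, State held_state)
{
  if (!classification.has_value()) {
    return estimation;  // no recognition available: fall back to estimation
  }
  if (is_flashing) {
    return held_state;  // flashing: keep the state derived from last_signals
  }
  return *classification;  // steady light: use the classification result
}
```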
+ ## Assumptions / Known limits ## Future extensions / Unimplemented parts diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 6e720aabc7593..3782c5944245e 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -8,15 +8,22 @@ traffic_light_classifier is a package for classifying traffic light labels using ### cnn_classifier -Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. -Totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. +Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. +We trained classifiers for vehicular signals and pedestrian signals separately. +For vehicular signals, totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: - | Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | | EfficientNet-b1 | 128 x 128 | 99.76% | | MobileNet-v2 | 224 x 224 | 99.81% | +For pedestrian signals, totally 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. +The information of the models is listed here: +| Name | Input Size | Test Accuracy | +| --------------- | ---------- | ------------- | +| EfficientNet-b1 | 128 x 128 | 97.89% | +| MobileNet-v2 | 224 x 224 | 99.10% | + ### hsv_classifier Traffic light colors (green, yellow and red) are classified in HSV model. @@ -57,6 +64,7 @@ These colors and shapes are assigned to the message as follows: | `classifier_type` | int | if the value is `1`, cnn_classifier is used | | `data_path` | str | packages data and artifacts directory path | | `backlight_threshold` | float | If the intensity get grater than this overwrite with UNKNOWN in corresponding RoI. Note that, if the value is much higher, the node only overwrites in the harsher backlight situations. Therefore, If you wouldn't like to use this feature set this value to `1.0`. The value can be `[0.0, 1.0]`. The confidence of overwritten signal is set to `0.0`. | +| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. | ### Core Parameters diff --git a/perception/autoware_traffic_light_map_based_detector/README.md b/perception/autoware_traffic_light_map_based_detector/README.md index 8a59db19ae64d..b6ba14df262be 100644 --- a/perception/autoware_traffic_light_map_based_detector/README.md +++ b/perception/autoware_traffic_light_map_based_detector/README.md @@ -9,7 +9,7 @@ Calibration and vibration errors can be entered as parameters, and the size of t ![traffic_light_map_based_detector_result](./docs/traffic_light_map_based_detector_result.svg) If the node receives route information, it only looks at traffic lights on that route. -If the node receives no route information, it looks at a radius of 200 meters and the angle between the traffic light and the camera is less than 40 degrees. +If the node receives no route information, it looks at a radius of max_detection_range and the angle between the traffic light and the camera is less than traffic_light_max_angle_range. 
## Input topics

@@ -37,6 +37,8 @@ If the node receives no route information, it looks at a radius of 200 meters an
 | `max_vibration_width` | double | Maximum error in width direction. If -5~+5, it will be 10. |
 | `max_vibration_depth` | double | Maximum error in depth direction. If -5~+5, it will be 10. |
 | `max_detection_range` | double | Maximum detection range in meters. Must be positive |
+| `car_traffic_light_max_angle_range` | double | Maximum angle between the vehicular traffic light and the camera in degrees. Must be positive |
+| `pedestrian_traffic_light_max_angle_range` | double | Maximum angle between the pedestrian traffic light and the camera in degrees. Must be positive |
 | `min_timestamp_offset` | double | Minimum timestamp offset when searching for corresponding tf |
 | `max_timestamp_offset` | double | Maximum timestamp offset when searching for corresponding tf |
 | `timestamp_sample_len` | double | sampling length between min_timestamp_offset and max_timestamp_offset |
diff --git a/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp b/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp
index 08c6600d91923..67d6102545e47 100644
--- a/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp
+++ b/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp
@@ -85,6 +85,9 @@ int compareRecord(
   int visible_score_1 = calVisibleScore(r1);
   int visible_score_2 = calVisibleScore(r2);
   if (visible_score_1 == visible_score_2) {
+    /*
+    if the visible scores are the same, the one with higher confidence is of higher priority
+    */
     double confidence_1 = r1.signal.elements[0].confidence;
     double confidence_2 = r2.signal.elements[0].confidence;
     return confidence_1 < confidence_2 ? -1 : 1;
diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md
index bc57dbea76c97..ce2a103d4715a 100644
--- a/perception/autoware_traffic_light_occlusion_predictor/README.md
+++ b/perception/autoware_traffic_light_occlusion_predictor/README.md
@@ -8,22 +8,24 @@ For each traffic light roi, hundreds of pixels would be selected and projected i

 ![image](images/occlusion.png)

-If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0.
+If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. The signal whose occlusion ratio is larger than max_occlusion_ratio will be set as unknown type.
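The comment added to `compareRecord` above documents the fusion tie-break rule. A condensed sketch of that ranking, with simplified record types standing in for the real message structures:

```cpp
// Condensed sketch of the priority rule documented above: fusion records are
// ranked first by visible score, and only on ties by classification
// confidence (simplified stand-ins for the real record/message types).
struct RecordSummary
{
  int visible_score;
  double confidence;
};

int compareRecordSummary(const RecordSummary & r1, const RecordSummary & r2)
{
  if (r1.visible_score != r2.visible_score) {
    return r1.visible_score < r2.visible_score ? -1 : 1;
  }
  // if the visible scores are the same, the one with higher confidence wins
  return r1.confidence < r2.confidence ? -1 : 1;
}
```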
## Input topics -| Name | Type | Description | -| -------------------- | ---------------------------------------------- | ------------------------ | -| `~input/vector_map` | autoware_map_msgs::msg::LaneletMapBin | vector map | -| `~/input/rois` | autoware_perception_msgs::TrafficLightRoiArray | traffic light detections | -| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | -| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | +| Name | Type | Description | +| -------------------- | --------------------------------------------------- | ------------------------ | +| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | +| `~/input/car/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | +| `~/input/pedestrian/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | +| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | +| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | +| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | ## Output topics -| Name | Type | Description | -| -------------------- | ---------------------------------------------------- | ---------------------------- | -| `~/output/occlusion` | autoware_perception_msgs::TrafficLightOcclusionArray | occlusion ratios of each roi | +| Name | Type | Description | +| -------------------- | --------------------------------------------------------- | ---------------------------- | +| `~/output/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | ## Node parameters @@ -34,3 +36,4 @@ If no point cloud is received or all point clouds have very large stamp differen | `max_valid_pt_dist` | double | The points within this distance would be used for calculation | | `max_image_cloud_delay` | double | The maximum delay between LiDAR point cloud and camera image | | `max_wait_t` | double | The maximum time waiting for the LiDAR point cloud | +| `max_occlusion_ratio` | int | The maximum occlusion ratio for setting signal as unknown | From f8f2da577b1ad12e6447ec7354c5296944dbef2e Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Tue, 2 Apr 2024 11:48:20 +0900 Subject: [PATCH 17/37] fix: precommit Signed-off-by: tzhong518 --- .../README.md | 11 +++++--- .../README.md | 24 ++++++++--------- .../README.md | 26 +++++++++---------- .../README.md | 22 ++++++++-------- 4 files changed, 44 insertions(+), 39 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 9d28d1f84a17e..8dafabea7e844 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -25,13 +25,16 @@ | Name | Type | Description | Default value | | :---------------------------- | :------- | 
:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | -| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedetrian traffic light color. +| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedetrian traffic light color. | ## Inner-workings / Algorithms + 1. Estimate the color of pedestrian traffic light from HDMap and detected vehicle traffic signals. 2. If pedestrian traffic light recognition is available, determine the final state based on classification result and estimation result. + ### Estimation + ```plantuml start @@ -83,13 +86,14 @@ If traffic between pedestrians and vehicles is controlled by traffic signals, th ### Final state + ```plantumul start if (the pedestrian traffic light classification result exists)then : update the flashing flag according to the classification result(in_signal) and last_signals if (the traffic light is flashing?)then(yes) : update the traffic light state - else(no) + else(no) : the traffic light state is the same with the classification result if (the classification result not exists) : the traffic light state is the same with the estimation @@ -98,6 +102,7 @@ end ``` #### Update flashing flag +
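Restated as plain code, the final-state selection in the PlantUML above could look like the sketch below. The `Signal` type and the flashing fallback are assumptions for illustration; the diagram does not spell out how the state is updated while the light is flashing, and the real estimator keeps richer history state:

```cpp
#include <optional>

enum class Color { kRed, kAmber, kGreen, kUnknown };

struct Signal
{
  Color color;
};

// Final-state selection following the PlantUML: prefer the classification
// result when it exists, fall back to the estimation otherwise, and
// re-derive the state while the light is judged to be flashing. Falling
// back to the estimation in the flashing branch is one plausible choice.
Signal decideFinalState(
  const std::optional<Signal> & classified, const Signal & estimated, bool is_flashing)
{
  if (!classified.has_value()) {
    return estimated;  // no classification result: same as the estimation
  }
  if (is_flashing) {
    return estimated;  // flashing: update the state instead of trusting one frame
  }
  return *classified;  // steady light: same as the classification result
}
```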
diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 3782c5944245e..2315ac17364a3 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -12,17 +12,17 @@ Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. We trained classifiers for vehicular signals and pedestrian signals separately. For vehicular signals, totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: -| Name | Input Size | Test Accuracy | +| Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | -| EfficientNet-b1 | 128 x 128 | 99.76% | -| MobileNet-v2 | 224 x 224 | 99.81% | +| EfficientNet-b1 | 128 x 128 | 99.76% | +| MobileNet-v2 | 224 x 224 | 99.81% | For pedestrian signals, totally 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: -| Name | Input Size | Test Accuracy | +| Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | -| EfficientNet-b1 | 128 x 128 | 97.89% | -| MobileNet-v2 | 224 x 224 | 99.10% | +| EfficientNet-b1 | 128 x 128 | 97.89% | +| MobileNet-v2 | 224 x 224 | 99.10% | ### hsv_classifier @@ -59,12 +59,12 @@ These colors and shapes are assigned to the message as follows: ### Node Parameters -| Name | Type | Description | -| --------------------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `classifier_type` | int | if the value is `1`, cnn_classifier is used | -| `data_path` | str | packages data and artifacts directory path | -| `backlight_threshold` | float | If the intensity get grater than this overwrite with UNKNOWN in corresponding RoI. Note that, if the value is much higher, the node only overwrites in the harsher backlight situations. Therefore, If you wouldn't like to use this feature set this value to `1.0`. The value can be `[0.0, 1.0]`. The confidence of overwritten signal is set to `0.0`. | -| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. | +| Name | Type | Description | +| ----------------------------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | +| `classifier_type` | int | if the value is `1`, cnn_classifier is used | +| `data_path` | str | packages data and artifacts directory path | +| `backlight_threshold` | float | If the intensity get grater than this overwrite with UNKNOWN in corresponding RoI. Note that, if the value is much higher, the node only overwrites in the harsher backlight situations. 
Therefore, If you wouldn't like to use this feature set this value to `1.0`. The value can be `[0.0, 1.0]`. The confidence of overwritten signal is set to `0.0`. | +| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. | ### Core Parameters diff --git a/perception/autoware_traffic_light_map_based_detector/README.md b/perception/autoware_traffic_light_map_based_detector/README.md index b6ba14df262be..6a8e7bb476f03 100644 --- a/perception/autoware_traffic_light_map_based_detector/README.md +++ b/perception/autoware_traffic_light_map_based_detector/README.md @@ -29,16 +29,16 @@ If the node receives no route information, it looks at a radius of max_detection ## Node parameters -| Parameter | Type | Description | -| ---------------------- | ------ | --------------------------------------------------------------------- | -| `max_vibration_pitch` | double | Maximum error in pitch direction. If -5~+5, it will be 10. | -| `max_vibration_yaw` | double | Maximum error in yaw direction. If -5~+5, it will be 10. | -| `max_vibration_height` | double | Maximum error in height direction. If -5~+5, it will be 10. | -| `max_vibration_width` | double | Maximum error in width direction. If -5~+5, it will be 10. | -| `max_vibration_depth` | double | Maximum error in depth direction. If -5~+5, it will be 10. | -| `max_detection_range` | double | Maximum detection range in meters. Must be positive | -| `car_traffic_light_max_angle_range` | double | Maximum angle between the vehicular traffic light and the camera in degrees. Must be positive -| `pedestrian_traffic_light_max_angle_range` | double | Maximum angle between the pedestrian traffic light and the camera in degrees. Must be positive | -| `min_timestamp_offset` | double | Minimum timestamp offset when searching for corresponding tf | -| `max_timestamp_offset` | double | Maximum timestamp offset when searching for corresponding tf | -| `timestamp_sample_len` | double | sampling length between min_timestamp_offset and max_timestamp_offset | +| Parameter | Type | Description | +| ------------------------------------------ | ------ | ---------------------------------------------------------------------------------------------- | +| `max_vibration_pitch` | double | Maximum error in pitch direction. If -5~+5, it will be 10. | +| `max_vibration_yaw` | double | Maximum error in yaw direction. If -5~+5, it will be 10. | +| `max_vibration_height` | double | Maximum error in height direction. If -5~+5, it will be 10. | +| `max_vibration_width` | double | Maximum error in width direction. If -5~+5, it will be 10. | +| `max_vibration_depth` | double | Maximum error in depth direction. If -5~+5, it will be 10. | +| `max_detection_range` | double | Maximum detection range in meters. Must be positive | +| `car_traffic_light_max_angle_range` | double | Maximum angle between the vehicular traffic light and the camera in degrees. Must be positive | +| `pedestrian_traffic_light_max_angle_range` | double | Maximum angle between the pedestrian traffic light and the camera in degrees. 
Must be positive | +| `min_timestamp_offset` | double | Minimum timestamp offset when searching for corresponding tf | +| `max_timestamp_offset` | double | Maximum timestamp offset when searching for corresponding tf | +| `timestamp_sample_len` | double | sampling length between min_timestamp_offset and max_timestamp_offset | diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md index ce2a103d4715a..0a51de3dc2e3a 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/README.md +++ b/perception/autoware_traffic_light_occlusion_predictor/README.md @@ -12,19 +12,19 @@ If no point cloud is received or all point clouds have very large stamp differen ## Input topics -| Name | Type | Description | -| -------------------- | --------------------------------------------------- | ------------------------ | -| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | -| `~/input/car/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | -| `~/input/pedestrian/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | -| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | -| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | -| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | +| Name | Type | Description | +| ------------------------------------ | --------------------------------------------------- | -------------------------------- | +| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | +| `~/input/car/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | +| `~/input/pedestrian/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | +| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | +| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | +| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | ## Output topics -| Name | Type | Description | -| -------------------- | --------------------------------------------------------- | ---------------------------- | +| Name | Type | Description | +| -------------------------- | --------------------------------------------- | ------------------------------------------------------------ | | `~/output/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | ## Node parameters @@ -36,4 +36,4 @@ If no point cloud is received or all point clouds have very large stamp differen | `max_valid_pt_dist` | double | The points within this distance would be used for calculation | | `max_image_cloud_delay` | double | The maximum delay between LiDAR point cloud and camera image | | `max_wait_t` | double | The maximum time waiting for the LiDAR point cloud | -| `max_occlusion_ratio` | int | The maximum occlusion ratio for setting signal as unknown | +| `max_occlusion_ratio` | int | The maximum occlusion ratio for setting signal as unknown | From 2d72c1b15936a710933990b6ec1ad987d3eca713 Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Tue, 2 Apr 2024 12:01:53 +0900 Subject: [PATCH 18/37] fix: spell check Signed-off-by: tzhong518 --- perception/autoware_crosswalk_traffic_light_estimator/README.md | 2 +- 1 file changed, 1 
insertion(+), 1 deletion(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 8dafabea7e844..9bc071b26f074 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -26,7 +26,7 @@ | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | | `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedetrian traffic light color. | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. 
| ## Inner-workings / Algorithms From 1d37b17f2bdfc8efd746675eb179f3d193bbe797 Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Wed, 3 Apr 2024 14:51:54 +0900 Subject: [PATCH 19/37] fix: add image Signed-off-by: tzhong518 --- .../images/flashing_state.png | Bin 0 -> 24574 bytes 1 file changed, 0 insertions(+), 0 deletions(-) create mode 100644 perception/autoware_crosswalk_traffic_light_estimator/images/flashing_state.png diff --git a/perception/autoware_crosswalk_traffic_light_estimator/images/flashing_state.png b/perception/autoware_crosswalk_traffic_light_estimator/images/flashing_state.png new file mode 100644 index 0000000000000000000000000000000000000000..7686f3842e75c92cfd27f9de08ef1ecf6eb3651f GIT binary patch literal 24574 zcmd43WmFtdw>C&ZfB?bW-Q696LvVL@cemg!!L^a#8r&hcySr;+jXTqM@4espW@gR5 znYE^Vbam}Ib!4A?PVN2dr#n(nUJ?Ng7Y+gf0zq0zOc?^=GXVU(<0~}yb0)Sr9DMrX zA|kE&6@2)9HH`pYzlPMH*n2k&=^$+o8Nu z`gs_lbZBi*%w$xD#JbCDw_?ty_x$4nk4jC7c2_K^F&xwO=1+M`&$mZXXp+Ofd%7lE z66Jjg`g^q|SOYw5bN2N21^8U}9I`$52|citc6D_Tl7t|Od?oQ--tI#D{`KF*E4$P# z&3_3)$pPSdA;<+9{~wFEJZnx*dPBxi9jl)?~=@vhx9(%{-=$ z3hh6)A=frMw&YAr3q@bt%ZJd2Sy*QB(P01eJtB~SkFQ?mAjg3|8j#rUI-nD{2bPDQ;ez+IyTNqovHi)AB zX%jlE>k`zUR(t>P8XUtGcmDncg6IFZ_rR|NZvNrj7olLPe2Z%|&@F7aMTN1`z+PQ- zKPn$tb_zV3)9|KF@Q(7xHkpx3IpeLq)&Nr2mjAeqgSUMhD+2Hob*7$ubo^ZvY21+O z<}1D$ln)I~5kULzRs}CX{5L{ympau~m7+cdJKKTqXYcmpN*nkZ;jf)x!B{)*UP4rK zx6qybgk)0bm*Wij0x3rZyLkt!X}XfvJvO{S;gDE#TJxe7z)GExIOBz{EmMs(WGLV< zi5Z;U^t%wNEYQufrh^F2(f5vb#02;|jy{(`s)gp8ggHtg0S1+-0nNz3l_jPBm`%-{ zY?%Hvfpg7)DbwPoBlR!iTat{ay7OCz&3zk>Z?Am!{(mc6MN)u`c27$_fy(R=YoX$ zh#8|(=#+al#rj%v8>;sXgR@1;p`=W_wuYNEAo;-P$c3zyDxhU^wwk`+kkYx3vPp zW|SSa{daM+2GEV^H$`->=F+b|#e9zZ`GRQTdU_|uF5AQBPzH|)S zAtscoer(G0u@%pjR24NxAUD=wSS!jAU*QT*mylCnY_9q*k3snfWQ*dL*uzk1T8Fb{ zwijuS0)Ad4{eK33%qEcLsjddHZj03u4MTbPYiS&=w-ZwoRk^{5m`l{(wT#l@)8aU# z$v1Ft7ei3f<59rY2QjHnd{G6Dk5f(Z?9ZS?mVdpb?N0C>m{J0q?l|nhU+R%fXGX74 zPEkqAZV|27nbg4xuba}}<~qR|78RX;(=L$c$*3v!{OGI4=RfLciZJ4m5(GX0hb$sj zs(M>?`orS(7suSM5jAR^&&A(6Eb$%Q<_rhpmYBs`dxdobE`;W`eJarQ^j2x%`39)e zF4C2@cIvEzcjR@&kT6lNKS-60t)Z>i827P$jtL7M7{vV*T5L0j~*Jz1eB}29Nm-v2CBic zem{RQt!${?JC-@jv-vSm<(y!NeJhY)}XaSniDk2S0TUMz-7VYAgO{ zT5JSTJr3HF_O=jRe$L%{XUONXeyl76@O-d`Y!9uM^1ng)s39ob3_5qC{WYs= zQcLg+jkS;{cv) zwGVZPMWQJ-pD4Z_r1*neiPzxD8ej_$%f7==Vdzr*QgOxgmhsK0If2=M`cz}#)vezV zbHYK2uyuL2IF~o2%E#tW-vge9X+7$4Vf@}m<6?yR;Hg`G-|@hbvSsn+sFzkA0e7-Z zjKBrAIXbf}kZX)bT{#$+`Fg1TaTa-3J&tVWbmO^Rmv~@sb2mEMTegqDZa~i?R7Jy{ z1el`J+U?~1eSL$oms7R&W2&*k^@zE&g5*Z1f}6vdoTzdhWqp~fo%&B=NWcrA)2}f( z@ZV01KkMSrXal`Big7JiL3}AofMAt?rUdJKt?$|{Y|Ig{Jq$VvSP5=%nn>a~x1`c7 zJyN#wTRUQ?G5+AXi@75h@Qp8i;X}b^Pi$`;w-~V7iu4=w{Y7@)6A~KRmg@SM+4)N} z(JXVrDQSD$*7hsGj^m1JU{Y8J2HJ9W|M^fM+E3vaH@i~kltzS)MeA(b3+=5N$bimi z&#FGjYxQ!#MlD&^-5_G89r=u0S_H0O%*#+;5iMWJc_q7Jb1ty45be85JG=*9@U`e( zT+zZA1I!jo!&0I=gll#G)5Bj>rT-X}lEC6#S*s8G*k##HN)6MWV<7xDH-g?b4+M*Q zp;+>9$vy-JM?%36zUtC_KmWjPU!M%Nrm<{ZkH|#$yM}h|X(2|Fvf$C*N0Hv=zBk=^ zt-Tq%f6c?yVd{ZAD()FF@}gI`0e(n_%aNi^k_Ryph}?ij*D~9-?|Zx-Tf;NE+V1&Fkj^#^I1zys<^`o&Pn~Jy9fUIvP9z-V#;;iWWGTplE)q$ z&z`c(BaeopNnI-``Hdl)ua8f61Z|%@g2x{5blpv0wx$=af0PI#0A2M@cD=LpsPTsM z@*npCu4;%Xe1$BTJzjmkdc+eAuH*ck$zilGQ%aJPjNh-);1DA!+f}&{zmBUodv{3L z_|OYev8mFy7O{|8t@YPFC7iE536Lq=e8^lS7oBlZKsdfd-c_ zOfhY=j+TBbDr@u&3aKmen_^d}dn2!zw&*dmN!j^Oxw)AKq0PU)V(MoKRSwhw%T#;Z zJaukZV0HvYBlF$dCzBz6>*y+EN&}ioSUS`ph?#AIk9^f;O$l) z8J;{aa8{SpEy8a7oOy(fIixs!dN-x8X-ZCdD}g_IK5#VlFD>B{{x!U9<19du7AOgkC_r&+364n)$=Z8W{ZxEE)|EYCA5aeAt>W^ zoI~|-Q&BdpU*;FNPGIrA>Fx}X?;rJjbqrMrcna)@o=|W)ys77ZeC6uD=w3Hp_mCwR z{BpV+@F(*SX|;7+62_*~6?fE6PxouroY@?~yHli!o8!>@FS$hd?^PsI&QL9%H6R#o 
zda|~*UnF>7YGO^1jL&ka$1`@n+&Pw=?*jD!WM3DBK|4PXIsKR_%7(d-z@BQ)`i0ZX`=CwHoWAQt+w)KFDa48zK_pnn9AA)K?_qad?u0h?jHzCcsxdp(jdV(c*IC46B*u@M(xgs5Rd+RC4+Rj2tPZ7^p0`O=jUoG= z2&KBoWAJ}$<^8g7_!mAv4(2^SKO=7RIGruCZ2DXZs8?w(jHt}3She?OL~j=m^b-8m z0Zvqu)Xh#(SXd0|X9Z|YJsdGAx#O&Fu*Y=HWcRkwsj4sbG03-Y%82>%Hv`l2WgX8S zwX-0Wf!rqbE>>5D{{BE@_KmJ_CCSm^tc3-q;&FM3;EX)H*@6yjd!JA?v$V2GNRQ@x zxy`Ii=g`p~j&7Y@x>3zKk+16+YF2xZ=aFK1|D5Zkk;zH(@pKk32?^eY0H#|kgkH6h zos0V>Cp(^3frwhdOkmonRqULhu8y1_uk!_-qq}i$3qZH9BO1ihTZXo@KYvv^E@;v{ zT*0(9+zwP1{8-ZQ*ERpqIb33}6u9>F0$A2q@}D(xZkksoRIV~I);?V3j%f(<{kPFY za35c49uoUxUk2YscHO1^A~v4u^3A2t`IFO zftG3*uhZh_v3OBG`?F-%a+!VROy4dU{N)CO5~Qv)pNpi^#pO@uv&ZWxr;%QjK{PiOZ3<5(l3Z|s6DXw#$K24jrQ=< zCA8bj``!nwr1S7{a&T+OSOJpBKIVg^_Mq;*avZBNek==tQTc1GwaG&(#X4!7}M0>d1tBC^_ZQ7N1HV-a3hBH{8sImfZx&?_#@;ekv|7b zBobQ6zCvLDtJyIH*B!rYO@|f|S$dH|-87+NhH+l-3AB&^+`|kQvi2g=b35!syp3h^)p`9v&VnTJQnAxI+#Yw2ec}(qo6kgGz+r|nnWvC<{)$K-x zn%am&L~F@ss}E%6${l9S><6?b_1OeXLs*yLs0e;h=en?HPC&tv7{%IV;|d$-#uZ=5 zw^c51|LxY4|3yLx-jWy1MH+f0zF2V^Q_S`j)p-4TZTj$Ls<5#U4P!BbnEURo_FyHU zywPM0o07#KY7%xdpBegvZ|?1ptR{1}BFo;Sj2nQt9u39_DU zPnX}wBR8Zqij~4RNvEp9V-gE}w`jDsIi-gW=gAaiz9mvUujbsoUju4qocZG?2mNO)q1$qDGyXGlqbTtD18AcmerXjSZ{;zW;5pof|8`3@A!x2U{sKuz<9dw9(Llxg&c(spsr{z~^ zlMr1R7*Y?BKP>qr2C}Z8rcgPWf`0pOq7oGK9CQRdh;qc@(WWCE7&FqDcvvya9ZI0S zBm|i$aE9$$$}bC9Y@uFGrbGuX5%*7gM^ymLDOnHr$q}w$>Au0m^}IwvBnq3@Um|p9X5=xC)s%d2$kpE+UFD}3i<7_>6fG1f_ zt0EGA23x!VDTLC9kvEE$sql_zpuTnAw|G0L1t7BmhtjO(}3IfdeUN+%&tqIeRc_JgG(e>pKsP;D;wZ|^m*+jq2;xZp# z=(p>yk)WAdFlbkz6W0B`4cq^{{avt$B|l`Uqf52ph$6|5?bluCkLd!_#1hU9f5gAl z%xC!wXFgmVAjxUgg28Z*Vfn8!Q3o{E_SlDM`@^?830r;GvoZOB8jt{~V4Ri0mdxv> z?zNrMG2h+TsRXb75%0YGDs>W~WJP8>!34H>e~`yA|&qC-By1h2*|8_T`d%U z*ATX^a9?X{z4wAF6#NLY%Yy4tOAi`sM?07m0MG4Ta?$hM5)-knEH|0=FUJ`>6)-+f zqfe8BV^9q0&RiA=KI@Cz_?2Xbea2Bj()N2YB0aYte{{RA(EyNrVx<-6MGjw>Onaqh zBb_6p!+iG-P4s|D<|Xf_UO0SX=|W>pM+@cu%!o~#`5S~6th3eFg^}ErtuGu3jGe|2u#3ighW+!4e~LLH zrVT9d66N@Q2A$N4q~l1&Fs0wX*M3@#9)25-y3bEo+?v93T12}x^{fYYTxPIcIaZ6; zZWyL7u7n?asEzM@G{1sj0SGMDi{B6!MK)@Aq4M)Q;j7YAJkE#aV52LWc8v^{qa|c8gNcVE5Vqz$IiRaN|qhp^!x_IMD36dwl5bF?ukbu z#1Vg-$BX%B3#~s3C5BUWk@6-DC$A#x-bNO&5Rf+&Lw>F>%bO|4EmdydN}{iavMEbJ z!-P-WQ%C1pN-N6ZYr{M^+WlrJ$+5LK0`)WOlL}ty9Nats#omeGHwHwl7dr-3Et)sX zmd?iL8B}%~=*;+=l>q$L4tN%V@&OAgbeBn-k;mVWL8sWg3YS#fsxreeh~Hv)t(+b# zL78j_ef{V23bFF>K9R6}?N8a z4oW?)0f{{U*v`4T_Li3R=4E&6%+xiWWlW#n*&)YFg+nS|CAodj>hGAtJ?uz%1n~o6 zG3FLsQIBA@Kg6K9bTMny;;15i=))O(>Jeiy5S_#-qSkg0!R(w^Y;bJY0L?k;asxkH zll?AB5|PiRO6xv5bCTCy?fAgTaBLVVIUxwjxGP^BxG&a3kgUVg6ImoSkq7;R!7}FF zXj#!Bi|o#pSe~P?U00JTD>qv``?9&2EvslVuv>Yuj{hCPTG1vD#rxGCManSxL!@9& z>Tv?UIL`LvYg@R;l?QL&2|S&3ZKSWbg5pmuL8OWACFs9CurOo*k!XwvoxgKRdIkbJ zzZY5ReR)R`OL&yPY#l;pN3%e_o26W2I$q4kS#oWebo1e}5_iak{M-8GF|^aPtg}Qc zCPXX%5aE_$s2Lz*PBaVrMGrYYN6D(eKw&n8%XtxNGPjPYl~hmIsUNKE`q_rVlk2>( zqREg4UiQ=!`JQy~l`)vK#8Ews%U!)F9%+zMM)2 z-VlGz`h1JJYQE|oNDmLJ#~%uo#hX7w`N{}!?h%cu=;HSTguOEJB^wnjT0H@$9R>?@I;v|+6|e!p?6n{;c- z;m(^oa$I6lf`{75{@zog6-kx zduE+bY@J`IVZLCoi8D<`=Ga8oGDVlQnM;pZVxVr}@B68`yH&jNXBisH3wV;ziW97) z-uF(Z(&>TyxgUeSn|E;Ty_25eaa}*Lh(UY*?5w4##W)QVqx6xd^!G{1DsI>eq4&-n zkyliz`h4^!TlqVl)Z^Kq!Cez%E56!Ev5_%?FxUuwUkqI-MGttF%XJW*j!a$1kw1h9 z$asqc`Q=e(HMUYYX5d70Lr+7a?86-9oS0&WOrK5TA;t0{Kw7suJig4s(w9SDYVAxU z%9qRS>-(WAH`P@JVkv z@p{z|wv=I$*3J$DML^j^;>5Gp3xtb9r0L!~068kQ07XSUC8cvMoy=`3?Fa*;Uj?;+!D zp!2=o4cdQnSl&r;#?QrFy)q`+ebYHb25dEhcm7~xO+-;_eg9$0pJBnFSfZ%3u#m() z6UnQu>K{Z$LZy0c0;TX2~0=uR0nfkQqKZ#ZCqurz&gf)9Q{#Q6ihvLJ(l zzYwxwMw7XASXsI$N-KZnPjir80lB^YM~=z}z%_lYxs|Mts&>up>8z+uJZWk9zN-oH zR0qE-M})kSeaFmtq%p2yRKdv~hxI*=@_?AjK9DJGzU^vvpd%NCjFD0rD%V>Of~^+H 
z7Z_Yj{f9dw%H1d1b*N=crzop%Vx zz*~Ox1`#9%ODg?g%>3y?$>#NpuZ~UP4ir>|^a z0wIEq29xwFDXtsim#=lRG_HTskc4=C!X+{ve+9|iBW4F~;;^$3QWfS%vRYdVARhsP z%p`H8>9$JpDwa4|;udU-(_IC_`z8G{_*c??Qq9%M`sMf$4dxI1P*UO&MC)L4s}4Rd zlrh`2)XB!~EB;vRt;?W&3Xn1qx2BZbFkXHRFa|k7v**YPw9jG^bV8Fr=Lz3|4}z7e z4F9-TTIyccB^OJxoR+^G zEO|rt^v51B60&l-$~JC8{q?9C<7jtQH{b?OKro;>?bY zDzYkD%l`zV!48asTd5GpJ-5wb@;!Kw_J1djEJ21SP4_VUyiCyj#~30F}kdfZ0*!3V7#^; zt>#P9nY+cFH{_t5&2V*?(jMhrT1)6b@-P5ea$?nYRxQ!@9eLAmaJ2rHCXcv{m%=Cr6T~XsD_G8Z!TJJyBj? z_r6hL09W{&bbamo4H^>MzNeQLi}^Sk1@1L# za=NF`rixSCPG*Z{gE_yCXY-)_j2C)O_t^9*S)!{319R^?d+|btNeY5HGm=DR;BtNh zah5~l8EiFv;RX{u`d$Yq`arSMIjDUGO!%6PYdG+uRNWtMmlP-XKM8%#SW+4FM!|g@ z867@lvredKvz2fg0TDQLC1 zmZ>g&y09J)2zciQ>*Bww2&%|^VBN4bba8Rns~sZT`z1-(W5|5_`1DKdSZ_5EtZnZf@4TQ};0tbe0qVyYn|>O4#8pEIt#8llP^)=kKY$;!e5 zYdGRtD?CX*#O-00!}Zfq?oA((P(%CqLZON`uLEPe;4}5nRPN;r$%CLYKRg`VzQ(NF z=-OJ=&C!%hO$iw2WR5EMW3$Al?|nkCSYx2zi6cGdemoHgzTK+O($PI#^&vUAx!Jk3 zBMQ>P{}c{}cGK|GTg+&ANeaZ}Y9RCpC`YryDK!&B-XmnUoctJdp%)!nY24HUMi;yVKn9I4f)Vz(aSM`(rrTTp%A~^pXaI{iQ$VFYe7R zo=eThXb#ru{QNw4&44!+d&+xl>k2jYJ<1&hX^U>MNDIkiT&e8bL4T-uQJV;V(#%`m z?;O84g)$2X%wtASzeM9g$ThOnqMidp?OE#Wt`P)~F{p29+MDbTN6tQkQ z>&L8(Fe?=}_#^0Tz7;eD1qENC7-A-@U&D1s%Es9&Eqiw-A%yv7?GOIwfUdfP zds&cw6kyeP+S2Avx?DOsQ8FeM8BNJ}`n7;($NehZe)Tt#9&@^9EhqjLIsryAb9V=6 zlIOoloW6CD!%m$6BO3$e^`Il6j|`@+$Dq|!ZEKqvJ#Nu(M6>BUVL$8~u(@SLsZV{| zl2*m1>pk7=VK)$JX!e1CYaQ#0csGF(yxEQL87GL28bBoQT3EWO+pSPqzmPNM*ZNyx(R>L z6p&eqhMEbG-d#ayBi_|lWpNnW3Gv`&DiJIARBHcN{~wSDvEH`4*a?Qt!0l?<+8AX)kT;Im8!`=xVVg-qwUDhE(%kTGe zY)i!I&cq$8A8?uD5lBcVJ_&rcg2q{^IzF^B(nz|TZ?X7{bNBs=7L0Rk$8oR+T{!)6 zD@?Qtbyl4^+6uQT5!yLqzI1wYT)`x~u!=o715db%O45pjZdm-`Dd;FKmo5BupF<3) zc(=sr;ENDTqNh9LgrAa_yjvwF#(!Z(3*E5R0A23^WVDX3M@R`-YwjJ$(mHE)T!~oR z0Tl%Sx2oow-GS%{Z@y2^NQAPgs_50#)f4(eU!~1-v z*h~34u8&VH${Wi?l}_`$+t(Z}mN*6|64jV1`*%GcKb!{_g0&q(qaA%3w{-~g{O4~r zv7StD0~XoP4J4IJ)Ll)niD{zMax#|M>tH^~PZa-tv6p(#fxsD~`~Xv4c(nt>ul{+NE{M&kT+}X?n4eS!&+D)>i84-TucDtiN};sFsi5l9ax5j z|27&_g?m*nQH1J;Nr6_+McCIC`lV(rFY8o6@oFzbQUXP=ktDWJyOMfd`HweaPk^Np z#6XmFP0@V$_rKq9yU%dIT8D{^4(40TTl`FR`}jtR`~Z9>l!~Z_QIT|~Y4h)yt2ZKj zs0nzp88>J9_j)l1cyludrwBpbE%|HTb7jj;=dLAu6dON9PWdEJ`)Y8jxx_^B`xN>m z4&LG|Hx9L(Dr*DAdE~9~XEp+*^QK&%!w9C}57^H>4BR#u{A3AbCfbC=3Ey?oUBiF3 z_D+bSP6147>_G58igtZ*wr3t4kk&W!w&?`&Bg@3vIJWfE(6RHyU6MLw1O){jA0HJ} zROY&m;J_*87MkUzl7u=SeiC49q@FA6ComdPSYXVT zJpmk3H(YK=JrfrmcY63P3WavmC3{+l>9VH^1;w!XiQTN4YQ*^m!J?+gDR135bpu)8 zjahbu{k5~(r{dn?FKHO6qqRX@EC+)qhWs;NQ2)ZA$=Ww zl=%ar03lOD+FBz%n|w=2+?iTD7CuNdIE<=`vLlcIR6fH5xs?<2AQ@&Qg}LlnqSTs^ zIG+h3Ni(F%u9W{M-oYNmxIv=u1qys>jVyK|7%u1wyQn?j2d-75tvi4>HKj>^u{RlR z?*TChXH+X6ZAOHxY>!vy%{YMAz$W|dQ5Z@Sap)^(Z1@LUffgZ8Z~T6IQxfkQ)La%k ztPkWT#5XpT+X*akM zcIis!jaZtLd{-RPqnfrvo7 z-SZWdO9!O?2e5%hHgmUocW)7V%Ga_v_TOY_%|>grGIy)Ty8mdGSl8T=cZ-Xr|cc4ZGm(PXglot1UdXOARe4JZ#V(F(>woM<24@$ z_ZVt~t@{6jo^kRB)TOC-FLmR#sMGaM;3z zho!>f1iq_e2^P*20LP~=s|YX42-e+05HD%2v!wp11x1@+Q-$Ca;I>%sVEuL?foS)D zMj?9sPi6l_Ie!juCJ&GCaG_RS-DVszG#9`6cvaEQ=xN)1z zWbVj{1ch;JBRIO#@r-~9j~+yzn)q6Bt$SK%AVs^X-t2s$e%C&r5!l4X0aK=W>Lrv@ z3r%P*dtAQGz$SV|2`b&MNGSVvqs*-cBKF%;@^OSF6C4U676#tU#TIrve+uP#PCT|? 
zF*uqhoHdZ^nSSV|^^S3~L1iLxq_UJi5)3V5Pivt1EiH7h{oZK)t)}?uc_;w5_=mS} zRb+DX>u&SC(d-%m?>u39o`xXI&=XtLsb6aK=)s}gFJoeG{8ymRBGiz+pqF@;-7=lr){K?;y(T&3qCrgdTnNEdbj!&2xymH@VZ?3ojgm0LfJUof~ovJF-ca zRw;UCbJ!cBl5Knf-Y4;;+ts7(WKCm8uEaXV=$60Sx1EkZNLEQj&kGbZ|0iueh!w@A z$t?sJuc_ScB2Y;GFy-&0L}TUx+;*zt=P+1~jS)NfV}E+^+~rwG+osx;UNd?EdES%G z9!Vm1R+m2cKV^_lAPsudQ6-gx*IvnyrFOU$B1-7g<*S z`&ZR~t{vskLZ}%3@t7%-0_96MN+e&9cwhJ8#2wui-jFuC=+M) znX*>RXoRKF!JAFP0u9{N+~4M76z3;jNdfX?ONW0!{~YGT2>%w)b1@8;?$!dy4~n$p zClvbiY2W8SXX4OfssicNO<%!EGaFKro4H}NhJw?VE^ zlbPWB`j<}+ob%0pN>qlec$OMFZ9d8VUsLg1*)uwJJ&%VA%YLx>2_loA)G?zq=IR(q zgLjw+*#-lkpK*gCGzDr)IJoh;A=P zzeJ@3b?C|`V;EH}qTOlI=eyGl#yd;-tjdOhT$D>Z>?5QEvU{@)ED9!LC@5JlJw~b5 zp9D{)GX5AEkMR>Y#-WAFrmth_ttqTz4QH1iXb!=kv{S?Rox{32L|RO8GH|X@`HCPEsrwY!C#v-mc{D6(9oeXBwhjP!UxSbVCfq&`pPL9xs z&d|!O&d~I=J5N>g1Q3zL_|y}-e$9`aYFpA1z;cJwvtAH&6lQnZ(rd=G-< zxn?H(HDs^(hjaMJ%523`a5OJRvDtHB$QYg2Y4uO02d2mdm3eQz0wk+(`tY9X7kv#N z%&4w1VUIK-e-AlRO`U}vdlE9vm}DmQNAaOL-&5*Izh47U%dDlpM|eXyk~9B>66kyr zf^R7*=CV(O{Ql=o3vjJ;eVTt(qWsnIr`~_W%VcD1=qN`BIwm9J+LG4A?ii)jCS7Hy zO6`1T;uPhEq2Dx>c8^*W+Cu3%0KmBlTj7@k`g!(ihZNJa;NlJkjrxShd-Q4 z?)Y}dVti>y>KC6tywG?eBtLr=Wv$dRF8%KsPz;Lp2 zB|PljZ&YN1IpX9m&y9N1BDc)a{Nh3dhP6X~Zn{-qtvf7CoQA1CA6nE6SyH-rAA!LM z_lurg{L$jd1u;&>ZM^@@gKzZ*8MbG{mO<0+nJjlX2$0w#Fj3KG_w-7lvCLktToXjB z{O{TE%TwcCkdv&>v;OYGVWLB{=;iV7`HHTypvslB?o=^W4W*b#Z)vFo6k;<=&vbI;J^PB_7- zbMq}5|JX|{d?%EX@r)#i(J^R|4{TAXVRGfg%wZiUWl6+@Z&I2z z756Re3NU4S$%?FGQVqhSNb(JC{Fy>vRryqa*^Tv3`{H#OW->U0T*n_1F%c!IAft3M zC$(xT{xR-;-|5DY9~?3)Ef!LI$Hesuq5$Gkx4HQ}t8Sful%8jNtZe`xx zk~5wl%T2<~6Q0oNqvNYQ#yT_I%B-jzLUCIXdCi?N-J2Y+^(qgloaaDC>9^jK-|fJS ze~_hKH3<7v>kcV?qf9{EDuLKE61Ny{b7^IThN2f_e{CDroE#~So6NQz{k z2P{3O8Mm|Wz5IFYj^cU!Nc~86{k$DC>4%0h3{$Y z*EzYX#w;u|X-K-4TzB2=_|-yi=jQ|8XuC1F!dGt+oGs{InHxW5SMv90(gio;3t&kZ zXKGe_ia~ngyF-oCPZIZQe|^-yZn~Mv8nDNi5l&Hck7t!yh)mNwrQ?QUr%XB17^3Fa zX9e6f89dk~YflDd0U}MbY49urNEE`jWNv+Z{ib=Al-2du%4|-!G3Gv@MNuYO*;Yk8m09+u1(|@`=9uT6;fjEwkjp>1b_lZI1TZi&-0oR4{!ZcyBB>oPFclQ;$ zwq?kkFq>1N@a}|e-2amLscSq1 zdl`(Cy+uHKcsyV-SxrMUx(c%WBOb)01zBx~1YNeqjvz@Oe^^rv)~o^&N&!uVjlrA1 z=XZ$9cFM-YN_Tp5aI(MM8@;dHs*pb3Vh7FWi)>eHgq(U_u?>*mzgP+l6g=btC_hWm zn2Z#tesxWRG`24hxw;Jf7AI+GX@Q_^oTwHaXz+%`kwh=8hB)G6&AX4YXf|#vXP$w{ z1-31*%eIZ-#abVT(J*r)Cq&YQ=4L2D0I?|LEiI7im?Nw*UG|CUz^IE?=r)0TgvE_X z;e0s60Wc)R2nvL0^OFmQfSPbOTeHoH3132mm?mwrpkfs&_TH3?zN z3hFd25#-9;gT(dbz-WWGAd-?Y>%iwU>`Ye#XW7|Vb7?gv+c%^W49+sIO7K}e>WJd# z4UNHEqBvgQ(ZRtL6&)GwlXnP!agdoktmE)zJVkIixK+`#zx1BO(}mn?eS z1WYkui}1EW17f)j{Y-<-rQoA^bmjwU^dnzDrN*;K>!1q%4TfjQcHd68uXgR>RNN5N zdNX1bo)qusqVL<$Pt~?`QBEj!Vx)5ayAqzD<((`_30Gmaf0X5vfzu!36!>KWhU!{w zxg4J5V5uM%!`Uxhz5UX}u3ZROw+*PxJE{-wj*+dW{id`(tN*>KT(}BK;u!QIP8`Va z4j&2rRS}m-$1_+~Mp@-FOVpq2Yl$2p@ zmA{4bQ$`pYJ3xWsss|(MgE3Zi^mWZ;)N0zVhZNgfOo6JZw&C?&ek*X_vKA9NsUuo2 z(YD#2Bad@>w2h%MAF4GKavjeuo)cyD_3cD*GGsRw%_h9rWDhao@Il@SaI)V>4hN8p z^}nFQw+2WVkMNS6WXbk-V7TtRoj0;~B0!1f-z4E>&LZFTuJunWptyPqA$2|R%5bh5 zA{yvq3rx4$fi!gGNrXsxeLQa#Pz7t*@h1t9b)@M{q+c0$KSNUX6pP#w zG1nW+pxs44+T`)t%*4(9Ts&`As2-xG=pMUqQbphfU z(k7cm0Hs6|rPMzdS}bRHuYU`_7AB{q?Hw*U&-yy>k~o8cAjCKo2HDJ z(0bp${R$*BbU0hEI@#n2@hnF_Fg3DXrO*b*9L8~4Fd8~ z-@i|+(RX7-Ei#h4RZJ3!OB&`O;SPvRPGTkzjm2W$uWsKHf5tm{voa4uV?V@i#ru|C zPKkoOe8q5n6OH4knzh{GEj0Rwe7FuA#_-B}nCJY4$HyL;@3Is$$Gm!Jwt&h#*1ePN zlSwq!RC2&Su7m(P^u#*J5!>Acp<*;QT1|4Ck7GIihAn7ad!^hMxg;v^w9*;5?_TJy zTeU>?3Y%FbE|Bb^=aB17jko7CJ&`?g-!WPy{N+?N%H7;RgNb3So<_0paVqwc9i23>hzK^ zreHcz5U;?Fi$z1{%{=jEH^a?_pu(zFbOO#_^^Ab0iui)_>JnjVU(HvCXizOy?V-m> zIvBlWm=HafiEf-n7&$TD3~BUTZ1-IdlKmjS|LJ9MPd7j9kXOMK zw~iH*yBK!Rf)KwLw!-I5GJcg6CediFrC+NP&=%q@mK8CXTgi_4EVe*E5b^x` 
zUjS>%K2_XC_hTn_I~M4K*x1gbpQ?#X76?*_N*Vi{c8S8AG$Acr%qt1{9?9+=2orIu znQE(@dz@@VBJMFc<)&6>D&cKOv(;nCM~j+$Wuz%DFnK>G_<}N?{N85|d8WK3$L5Zf zrq)id*;utHn6{#3v5c3RD1&k~TytT!=>h^d{cZ<1?be0W5XIgX#RoQ#H*)(U+@Z?? z4xuuIG-@@}5vq}y2|1wLe)Xi}a?SWsi>Y!cJu72ylAgT_L&=r2Me4<(@|^ZH7G*3$ zB3f4rWn97hS?;+f;!zZ`Dk$gxY!6`9Dzw3#JMCB>Q+k#=)_ZvJudti8SXsvGmLVv13UWWL7_-G$c(~j@(W&wul(g#vuj9l$sGiNg z+X_9yw}Tw$^1UASC^$zS2=nYGzM$+I!!!mo=io6B^?ww=3+0(|m-4oR``3gwWuzQm zkRDzbPTd$qJ#g0D{0vTFip>dS`?Mf0x`r~VfA92XQQVN)QOzZ4+5ecz6JD}PJ;E=R zpi$rp=S%5N*K^v??TmcPUL3;kQ|jJo)w#Pd>iwy_c7NQVf@+$+9K8J2PC%xMKq1B| z^$+xK3{6T8Q<74fJAFdgCDuJB6$7i=zfHuHqmLkb1n;+PYKC0O2@_Q-t9iBI`_O&v zl)SvR_`u;yyQS~%py{hokuvq{n&v+iKysv_qEK6>rn+-Wh^_FPminmwqnNAwin4px zA|N@0fJ%3FcL);FEs{z%(j`L*NGe@Jr!>+G9rMyCF(Wl7-HbFd3}@coS!bQU;HM?$F!BFh&8z1tbF141|CAjyQQz>HlTha^-A;qiGQj$^AP*+EAQV^rw{M{5FuJV6PZ6Nf_TqvtxaIdESmu6sm={W59EU!TF9Yuo_kGF^yfeK zr>4Z?ZH|A-?Atr#RZ0TA6!vMFt+5T~oA`2TIexiNMa(Tx1(j|W+@}&e+bJ?MHIjWM z&j8Uk);8*VpVS$<5o%!jUF{s+>9&eEI@!-*bQmpzS*sY*-(80iL3&thz32-e}n7#tjo(KPY=hOaCy z2W>>+eUJf(PK3XFJlof&U~XZ7Vc-9J<}A$d2mV1fd%^n}ud~0B4}GtTI?H>e@DUp9 z^mL>izptOGMA}7m*NOH+d}Bvyo>{MglJ%h~jVV0ZhE#c~E$9)HknyN*q`Asz-Brer z?-?r@{#kkv%3k*`#dX|#mEQS&Rgo@CNNQ!kcd&bTa&qz)1{r2^+rXJrlYH(jTp~gP zT$x3Syn?=S4w(L!6D zi0;QhR~?33Au^3JRJP17{Q7;rG}Q6vvV{Qlh~M%(YiVh5_VB0xgFnvvr6S^dmWV(j zPvPf`6hv3}IyySr2nZu51}bLx{NsMz*o8sri_yA-h7Mnaz1kIIF|yF?Q}l9-lS? z_KolD77HGnt^0xU$gAVci~aeZp||2tC=}9vztTUgCD+bmb<7ID%l?vl#)(_y~39puJU$=E94#2>D$RDH@yt zho~tGH(K{jNcJI;gXwE~Huwu!tuUe_J;nQrD}vWLI+-?l*SThV0ipa$b#;8E4VK(c z3;}?H0XAYQ{IB2yh+KhBq-kB3{QmSCb0~W?cF3>AmNkYY?rPMU)_2Q?h2O+x(q5xk zwBBJ@GYvM32I04_<*MRITx=P!944%*Vli-XoC_I1)4>HLq#fM z2R;c5x~&qWx~;v*GeLyG`3112z@pL5bS|vzQbm80RlFF4ZDUB{_USJL{BTuYbFhnJ zCwky6vCms3dPm5L?+Fnea#h=jQRaQ!|=3*=08ySU(6( zNRx5;>^&Nsl-$$P97cbncc}Dy%0=P z4~RaKRX9)-5>ls{>9_I=%!_>ZvtXfMKJ70hVU7oVbCmitXi+@~JgS(k4U;(CQR=0b zw4c4T5RUQ}_cJqh4i4a=O90tk*VgmFEkeR^3VYul*j%NfCWPB}wZqSYoG99oCAr-i zhZW#(5(HDBEh)3rXhCjlTHU5jjy`9|KaiQ~==AV6Vu#R&=@FCn*tamw~qB*BD&Z!G*%J-_b1JF}Lr=In=2$bi&ZhphmcTeaK zFOoYeIPmX7Bu3w?KR$IOz_f1qsyx-R!Yj3$uYiIj4kz!U%q(tGZKq(w0P-8hK;{;)^a?+;*FQrj%iDnCWvXI#s|Tj!JM z~y~~LL8E?gb`ldozuZ(25IWBt!`)^zk2g)xAT!O3y4Nd zjT!=MV}#fi3r5sVD)GDj{`s{8xtWN+5@?}+s>*hy|GRe**m?{fio+ZYbLqy>I!a{KGhojGJeN_`KTG z&^L=JpOy5y5^$pnZ9!&4-_PSaj;@M$QeSY^*xU2Z^+Y_m+wXTM<{KXqK^2}`+`mvV z&)S|zO=Az3kn^R#m{D05EmqhWxrZrdWARzJTV)(B&ln@4K^O%7Db4B^bJDoyyq z4vS^eKe_7JpRm!${Y0i7SCtW`p72r4Z8uN85rW?5YK)w@z_GYP$jer&Xx@j768rSj zxqk>SNp4VMTol#hKU3yo699(18Rx*>_4z%(d~1EoVyYuarNwDUpS(l`G-7O4(x|s4 zA`?Et=KEHlwur9R!#Nfv{HeF7NDVj%Hgdk$0UPAvYWc-SHV~cXB->8 z!OJD~BrH%$_fyTnI*z$~rpEP0i}?rpbAz38@rSRU>wqH2OMvN#c~@M_UIbl}SJ6# zzpy_}=jP8Zf5Ywkj|Y$w&*-=fNIdzAZ<0?47=6Z68AlaiI=cftnze`3NCWC6G`@~S zBBGVFF#4jxG%K7JH_cc(71ku5HYkbJAhBs~^Kuh6T5eYY+(&OrmG>k3hxR4uH-fQ9 zHmY+><8-Z(p-dJX?UJ_t(zc@#>scKrZCRq#i>k$;D%(RZr|ZW_nY{;Yq}wCz9ERqv z=lT;H-ex{czv}GSIR6op#&&D?biu|Xo?l?eyMq@QDb-B$eh67xgXRSI*HGUn888E< zR0bUxIUl?vR5Pu?jm8Ff*jDPUA3PAGPsG0Ef>+(O6C+3W`DWV?$4BbTZ>P&2edDIJ zXvjRmOaJYdbYi0xKYB_Rd^G1sSortH6)n-sb8t}j-bQ}*eNcjxn2pW{y3HZGbGfmv!f;jTHyfBqu)PoNl=Pcb1!tp3QCGexym1z_ryi%h_~L4DZj6H z?&ZMj$ggy1L#6NQITAQl?|Ja>Eh6E5t;v;W{S7#yK5FL4bKGs9?eL`~w9`)^ssCHd zG~|zcBX5;!I%q33?%)*fX$YY(w7k}euOQNMIm#dirnO%d&6q1G)%@TQGxviG0dV*X zu@n$7x^m<%X-)u;6n97SvEPw-Dkzo1 zV>>Qy;!(Wb0{ri{C&^{Hi3>hC(g_|b^&fHi@urFt1v>v6KboNTf@2Pj6#v85Gku=4 zdH7pnRzF{Pd&~DNKXQAhA?|@pCM>)_i%K>tSj7ciNO{FPy(HXpy%m?7k~h ziToO%Gl)d1w(k<_NYlg{_A9=Ut4;gv;SY=LnUnX-QatA>6n!tV={ZFz`wN}$2`!3M zN-9%*VM3y-dw4k?CN*_w;tB36`OOHjn^<^Wjz|`|D_fC9`U)Bz!?PPT^; 
zGDns>ks!yj^oD}PVd{x@DRw5^!y-sV)g~Ua&II)Z7XUmUGJeWyv(M}V1S-c@G+co6 z61b&`YvY5YMj!e@v$6og4n9VnwF7^<6FTU@J7kVW z0;dMPFxAgL+|QD{`!d^SsoZB+>V`@C*;O+d41{GYT<-0ol0_$+&j9{i{W3^_!0mqQ zi+M%e#9jLqEzMIoCMut2 z=jrzn=mXn4jHxqMdA<>vT+EQs`_4Is$-+-F{!CFfSF!QR0)e@S58zGRIRYh-%2vt& zrJ{|qs-YVPjO!0>sOsWzF-1h32uUmbZZ9ZUv@p04B>#Fr$mnkPcd_pmm$iTiMy)Sy zY19xS)iPkyThr~Yg$t<+MdpDwooU8tMwCgD9%2lKeQSi@@y77gb{93{GZAWGk^*jQVlIvaVIM;IUgl@WOSJpe*`9^XHye@NmUna zDKM57?$)kM+vSqj$A}@5?q-^@GEy7*+XwGLMS;Ghq8LUD%LPG6^r~vpEMAId$Bg#U zpB3%Hi<{)R{CLg1k+HvR5aoy6=0sHO30FPtrfQwK@JC?FNP5zIDKAB*=Gqs3$;wi= zD^ne3;}q>QWf^FD@o>q$=j6)ODOX7vtfW31;YRc2hS@!AM(Y-W~h6z3#xnvc2O~kJV7+uP0tMHD`t!VSh$EkNG%H4RaUI zI%>wNv3)Du^I3V>iDs;H+g^|!`CRmwmPpd^x^=n8l_?tmEfsnY%=s%z=4UHn0d?s8 zoUQIy;W@p=N|gq+y5|~%hE`+@=}kq4GB3((aSv9Xfa8%5B*_ju|v{#xL9ndy=zbIm|r`*wN{X}i+k}!Phs3% zF@yL%TK@6GQ{G#+PLse&AU#{v)_a;@oApFD&KM01%x#0Ib@L@lcf#fXdy0J6_3$_3 zVkRXWKDm^2BI_XBYm#e|vrj17ny_U)e<*<(M|e5(B})fWsMQat<78hZk0$m8#i|S$ zc#i2R7WSIHJ_OK&MSKCTfk~L!u9&^1e}YK#I9l&ZSDLhFm3M-Goa;v;J!!&MwXaKA zXn>1RvWGMCt5W27ADsu?CqNk~(mIioG|Snm)VjjFQy--Rm=BgDL8qbfun}mdYO1Cs!$dC4w$?7Bs>Vm-)nj2qqMsa7GHexp-dx!v24GKxue9N(g-EMEa zKCykP1$;81+X|QwV0bCplB@o3p~n>*3T9n&wti3?=Yz@H=5piCPly!U?hBoTCjEY4 z|3fkoEz9o^LfUMt-?v#T=C(C#TD3XFPW*!WBL`_ainz^Zrf-$w?|D5}NX!O87+V^t zj59Oxz6OLMuzvOhHs`IqE%R|nWlF!Pj+}_#MVrTfW>WYA8pGd}%E} zeWV!MmUx^Eq5G|_Ln(-)5Gf)^u`y*Hk!R;Gv_)HamL02L&#Q5&ZeEh^N^ytPk;C!o zaRlD{nFevGP&jyz1a!rX)6^Raf!SE9OUbmci`;rt8O%@L^eZ zlSEP6WFM2P4t%>R^T)0Bg6f~IO2v*ns;uTAwq&`O=lHn<<6d_jbv3{bysPo*Fw89? zk!&DC$Eh9Y?Q;!GKLn-^)}RQ#n~2plmE}Y}Q}3os8ax zi8_-%23+E7Iu>|&cGa@``M}w9aLafHPCC47ZLCu4z#Et3ypzxIIw_sm-krAuaIYTi zm;0Z3qHMU%DrTnP$={ES`T2*De!LB=vcE1OQmz}%HM*$TTRRGsig2&bRYe(x^0pnX z&pjv^&-CkhxNWkN|K{UIsxXH*qK-j(GJ5ZSppEfn{5h15d(Qa8VFu2_uL@2!FTu#K zQT{8-h(?FBWctQza$sd{!XMhsqINrUOWfJ)W53lL+r!OaPq1D`^X)&n5*|H~9Yc|C zyL6%b3!84Kb|JQs7Cu6oUo}&x1MM1s0EX~06d(M+O78(xtfkb-kr}Wo7H8tl(_vPVd z!)&g7#G-IyYHjl!hW&%LA`rqWqU8+N#10XlLpfz2*ZEE}mn4({u$JPJ*^ z435QG>6wom81dY!aAM8hEEce|Gu{!b=Oz(ow;0?FvqY_vO|RpF8e!gYin`0hl7M6b zxi+kCP&_X^k>`v;_@{&_lwY9R7uoy{_8J&mPA$=VxFO3`bDaTYJl}^sb!?qIfhPmXCiZrF3ufFR<>huppQ@dT4%*C z(Dpw?`EtfSyIcg(sTc^aF-$s=^;de|Kcn!J>?F1Oun#h(D2p4KE?s@ptyTw(SPSH~gpZY*i!!-lAXNkf)abZrqd@Y8NrGnL&{i z*2z#^E+|eafKps9+@x6s&>M?p5xeYC+O{wMFHf*+Z7%mIheK6-+UJrH*T2T$VC{mH zy(Z^72i#CRnZG2;rN+T^Th~xOn@v+X(PwxgTG6SJ7E>+9*uF>?Vy^YWq5`)I+`izV zR2pusD?8vh3#6!bp2zJY<)1C}Cmh-S3wMR{A}9WZQKKAu((|yTy?46_ghOuTH)mG6 zAkPyEa^1rDcQxtYeM*Of%STVL=UY}M3EY+AUA_)r?BY(|bd5?eUd>;8BZ4|rNHG`+ z1LsG_%U*Vy;2~?BZq&+eD`DJg&KyDZoV1J&|e}6jU*HF}JlWTe^NWPOaTAy`#(1=bdoI>UG z@L!N2O~B~@NKhwD8P)vdKWLn4BvvouyrF`AFCDqZW5V39uj+}c$e2fz)BOM2({2R( zpEoh>ds9UJ)=Dvg`Tvh0jl7d5&1Of|87lwQ-IY<1`92#p-*`wBZkXul>91Sl{v+i7 zqu|(P?gT!T?7KUtZQN<%`_HH{8=Kw#1@fB* Date: Tue, 2 Apr 2024 03:04:02 +0000 Subject: [PATCH 20/37] style(pre-commit): autofix --- perception/autoware_crosswalk_traffic_light_estimator/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 9bc071b26f074..514aeaf7813bc 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -26,7 +26,7 @@ | :---------------------------- | :------- | 
:------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | | `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. | ## Inner-workings / Algorithms From 35b04c7a44ed087575b62025efe34d2b326d03eb Mon Sep 17 00:00:00 2001 From: Tao Zhong <55872497+tzhong518@users.noreply.github.com> Date: Wed, 3 Apr 2024 12:06:22 +0900 Subject: [PATCH 21/37] Update perception/traffic_light_map_based_detector/README.md Co-authored-by: Dmitrii Koldaev <39071246+dkoldaev@users.noreply.github.com> --- perception/autoware_traffic_light_map_based_detector/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/perception/autoware_traffic_light_map_based_detector/README.md b/perception/autoware_traffic_light_map_based_detector/README.md index 6a8e7bb476f03..a7baf9538eaa7 100644 --- a/perception/autoware_traffic_light_map_based_detector/README.md +++ b/perception/autoware_traffic_light_map_based_detector/README.md @@ -9,7 +9,7 @@ Calibration and vibration errors can be entered as parameters, and the size of t ![traffic_light_map_based_detector_result](./docs/traffic_light_map_based_detector_result.svg) If the node receives route information, it only looks at traffic lights on that route. -If the node receives no route information, it looks at a radius of max_detection_range and the angle between the traffic light and the camera is less than traffic_light_max_angle_range. +If the node receives no route information, it looks at a radius of `max_detection_range` and the angle between the traffic light and the camera is less than `traffic_light_max_angle_range`. 
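The sentence above compresses two checks; a sketch of the candidate test, assuming scalar distance and angle inputs (the real node evaluates lanelet2 map elements through tf rather than plain scalars), is:

```cpp
#include <cmath>

// Illustrative check for whether a mapped traffic light should be considered
// when no route is available: within max_detection_range meters of the
// camera, and within traffic_light_max_angle_range degrees of its viewing
// direction. The scalar inputs here are an assumption for illustration.
bool isCandidateTrafficLight(
  double distance_m, double angle_deg, double max_detection_range,
  double traffic_light_max_angle_range)
{
  return distance_m <= max_detection_range &&
         std::fabs(angle_deg) < traffic_light_max_angle_range;
}
```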
## Input topics From 4703cb83533bf97c8343361d65ac9140184817e7 Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Wed, 3 Apr 2024 14:58:30 +0900 Subject: [PATCH 22/37] fix: image size Signed-off-by: tzhong518 --- perception/autoware_crosswalk_traffic_light_estimator/README.md | 2 +- perception/autoware_traffic_light_occlusion_predictor/README.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 514aeaf7813bc..0b4e2e818e9b7 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -104,7 +104,7 @@ end #### Update flashing flag
- +
## Assumptions / Known limits diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md index 0a51de3dc2e3a..ab985c7b17c15 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/README.md +++ b/perception/autoware_traffic_light_occlusion_predictor/README.md @@ -8,7 +8,7 @@ For each traffic light roi, hundreds of pixels would be selected and projected i ![image](images/occlusion.png) -If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. The signal whose occlusion ratio is larger than max_occlusion_ratio will be set as unknown type. +If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. The signal whose occlusion ratio is larger than `max_occlusion_ratio `will be set as unknown type. ## Input topics From fe232e5cfd6e6565d27fad55a7f7071f09932bc2 Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Tue, 3 Sep 2024 14:02:03 +0900 Subject: [PATCH 23/37] fix: topic name Signed-off-by: tzhong518 --- .../README.md | 4 ++-- .../crosswalk_traffic_light_estimator.launch.xml | 4 ++-- .../src/node.cpp | 4 ++-- .../map_based_prediction_node.hpp | 2 +- .../launch/map_based_prediction.launch.xml | 4 ++-- perception/autoware_traffic_light_arbiter/README.md | 6 +++--- .../launch/traffic_light_arbiter.launch.xml | 12 ++++++------ .../src/traffic_light_arbiter.cpp | 6 +++--- .../autoware_traffic_light_classifier/README.md | 2 +- .../launch/traffic_light_classifier.launch.xml | 4 ++-- .../src/traffic_light_classifier_node.cpp | 2 +- .../README.md | 4 ++-- .../traffic_light_multi_camera_fusion.launch.xml | 4 ++-- .../src/traffic_light_multi_camera_fusion_node.cpp | 4 ++-- .../README.md | 6 +++--- .../traffic_light_occlusion_predictor.launch.xml | 12 ++++++------ .../src/node.cpp | 6 +++--- .../launch/traffic_light_map_visualizer.launch.xml | 4 ++-- .../launch/traffic_light_roi_visualizer.launch.xml | 4 ++-- .../src/traffic_light_roi_visualizer/node.cpp | 2 +- 20 files changed, 48 insertions(+), 48 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 0b4e2e818e9b7..8c81ed68efa7b 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -12,13 +12,13 @@ | ------------------------------------ | ------------------------------------------------ | ------------------ | | `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | | `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route | -| `~/input/classified/traffic_signals` | `tier4_perception_msgs::msg::TrafficSignalArray` | classified signals | +| `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | ### Output | Name | Type | Description | | -------------------------- | ------------------------------------------------------- | --------------------------------------------------------- | -| `~/output/traffic_signals` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | output that contains estimated pedestrian traffic signals | +| `~/output/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | output that contains estimated pedestrian traffic signals | 
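For readers tracing the rename into code, a minimal subscriber/publisher pair on the new topic names might look like the following sketch. The relay node itself is hypothetical and only assumes the standard `autoware_perception_msgs` message header; the real estimator merges estimation and classification results rather than forwarding messages:

```cpp
#include <memory>

#include <rclcpp/rclcpp.hpp>

#include <autoware_perception_msgs/msg/traffic_light_group_array.hpp>

using autoware_perception_msgs::msg::TrafficLightGroupArray;

// Hypothetical node wired to the renamed topics; it simply relays the
// classified groups from the input topic to the output topic.
class TrafficLightsRelay : public rclcpp::Node
{
public:
  TrafficLightsRelay() : rclcpp::Node("traffic_lights_relay")
  {
    pub_ = create_publisher<TrafficLightGroupArray>("~/output/traffic_lights", rclcpp::QoS{1});
    sub_ = create_subscription<TrafficLightGroupArray>(
      "~/input/classified/traffic_lights", rclcpp::QoS{1},
      [this](const TrafficLightGroupArray::ConstSharedPtr msg) { pub_->publish(*msg); });
  }

private:
  rclcpp::Publisher<TrafficLightGroupArray>::SharedPtr pub_;
  rclcpp::Subscription<TrafficLightGroupArray>::SharedPtr sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<TrafficLightsRelay>());
  rclcpp::shutdown();
  return 0;
}
```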
## Parameters diff --git a/perception/autoware_crosswalk_traffic_light_estimator/launch/crosswalk_traffic_light_estimator.launch.xml b/perception/autoware_crosswalk_traffic_light_estimator/launch/crosswalk_traffic_light_estimator.launch.xml index 2e1437ecd7d93..9fb9c1346339f 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/launch/crosswalk_traffic_light_estimator.launch.xml +++ b/perception/autoware_crosswalk_traffic_light_estimator/launch/crosswalk_traffic_light_estimator.launch.xml @@ -16,8 +16,8 @@ - - + +
diff --git a/perception/autoware_crosswalk_traffic_light_estimator/src/node.cpp b/perception/autoware_crosswalk_traffic_light_estimator/src/node.cpp index 5d9f06c3432b5..13f31d92b99a7 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/src/node.cpp +++ b/perception/autoware_crosswalk_traffic_light_estimator/src/node.cpp @@ -92,11 +92,11 @@ CrosswalkTrafficLightEstimatorNode::CrosswalkTrafficLightEstimatorNode( "~/input/route", rclcpp::QoS{1}.transient_local(), std::bind(&CrosswalkTrafficLightEstimatorNode::onRoute, this, _1)); sub_traffic_light_array_ = create_subscription( - "~/input/classified/traffic_signals", rclcpp::QoS{1}, + "~/input/classified/traffic_lights", rclcpp::QoS{1}, std::bind(&CrosswalkTrafficLightEstimatorNode::onTrafficLightArray, this, _1)); pub_traffic_light_array_ = - this->create_publisher("~/output/traffic_signals", rclcpp::QoS{1}); + this->create_publisher("~/output/traffic_lights", rclcpp::QoS{1}); pub_processing_time_ = std::make_shared(this, "~/debug"); } diff --git a/perception/autoware_map_based_prediction/include/map_based_prediction/map_based_prediction_node.hpp b/perception/autoware_map_based_prediction/include/map_based_prediction/map_based_prediction_node.hpp index 1675f8b1fbfa2..8eccee58ca4c0 100644 --- a/perception/autoware_map_based_prediction/include/map_based_prediction/map_based_prediction_node.hpp +++ b/perception/autoware_map_based_prediction/include/map_based_prediction/map_based_prediction_node.hpp @@ -87,7 +87,7 @@ class MapBasedPredictionNode : public rclcpp::Node rclcpp::Subscription::SharedPtr sub_objects_; rclcpp::Subscription::SharedPtr sub_map_; autoware::universe_utils::InterProcessPollingSubscriber - sub_traffic_signals_{this, "/traffic_signals"}; + sub_traffic_signals_{this, "/traffic_lights"}; // debug publisher std::unique_ptr> stop_watch_ptr_; diff --git a/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml b/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml index 2c668639c2a56..915dc53002359 100644 --- a/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml +++ b/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml @@ -3,14 +3,14 @@ - + - + diff --git a/perception/autoware_traffic_light_arbiter/README.md b/perception/autoware_traffic_light_arbiter/README.md index 619154e1e183b..4260b50bfe9ec 100644 --- a/perception/autoware_traffic_light_arbiter/README.md +++ b/perception/autoware_traffic_light_arbiter/README.md @@ -28,14 +28,14 @@ The table below outlines how the matching process determines the output based on | Name | Type | Description | | -------------------------------- | ----------------------------------------------------- | -------------------------------------------------------- | | ~/sub/vector_map | autoware_map_msgs::msg::LaneletMapBin | The vector map to get valid traffic signal ids. | -| ~/sub/perception_traffic_signals | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from the image recognition pipeline. | -| ~/sub/external_traffic_signals | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from an external system. | +| ~/sub/perception_traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from the image recognition pipeline. | +| ~/sub/external_traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from an external system. 
| #### Output | Name | Type | Description | | --------------------- | ----------------------------------------------------- | -------------------------------- | -| ~/pub/traffic_signals | autoware_perception_msgs::msg::TrafficLightGroupArray | The merged traffic signal state. | +| ~/pub/traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The merged traffic signal state. | ## Parameters diff --git a/perception/autoware_traffic_light_arbiter/launch/traffic_light_arbiter.launch.xml b/perception/autoware_traffic_light_arbiter/launch/traffic_light_arbiter.launch.xml index 8e2b9e8cf02d3..3ec9c13a4d1e5 100644 --- a/perception/autoware_traffic_light_arbiter/launch/traffic_light_arbiter.launch.xml +++ b/perception/autoware_traffic_light_arbiter/launch/traffic_light_arbiter.launch.xml @@ -1,16 +1,16 @@ - - - + + + - - - + + + diff --git a/perception/autoware_traffic_light_arbiter/src/traffic_light_arbiter.cpp b/perception/autoware_traffic_light_arbiter/src/traffic_light_arbiter.cpp index e71629fa5dd28..9898840089fea 100644 --- a/perception/autoware_traffic_light_arbiter/src/traffic_light_arbiter.cpp +++ b/perception/autoware_traffic_light_arbiter/src/traffic_light_arbiter.cpp @@ -85,14 +85,14 @@ TrafficLightArbiter::TrafficLightArbiter(const rclcpp::NodeOptions & options) std::bind(&TrafficLightArbiter::onMap, this, std::placeholders::_1)); perception_tlr_sub_ = create_subscription( - "~/sub/perception_traffic_signals", rclcpp::QoS(1), + "~/sub/perception_traffic_lights", rclcpp::QoS(1), std::bind(&TrafficLightArbiter::onPerceptionMsg, this, std::placeholders::_1)); external_tlr_sub_ = create_subscription( - "~/sub/external_traffic_signals", rclcpp::QoS(1), + "~/sub/external_traffic_lights", rclcpp::QoS(1), std::bind(&TrafficLightArbiter::onExternalMsg, this, std::placeholders::_1)); - pub_ = create_publisher("~/pub/traffic_signals", rclcpp::QoS(1)); + pub_ = create_publisher("~/pub/traffic_lights", rclcpp::QoS(1)); } void TrafficLightArbiter::onMap(const LaneletMapBin::ConstSharedPtr msg) diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 2315ac17364a3..85b1c331fa590 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -52,7 +52,7 @@ These colors and shapes are assigned to the message as follows: | Name | Type | Description | | -------------------------- | ----------------------------------------------- | ------------------- | -| `~/output/traffic_signals` | `tier4_perception_msgs::msg::TrafficLightArray` | classified signals | +| `~/output/traffic_lights` | `tier4_perception_msgs::msg::TrafficLightArray` | classified signals | | `~/output/debug/image` | `sensor_msgs::msg::Image` | image for debugging | ## Parameters diff --git a/perception/autoware_traffic_light_classifier/launch/traffic_light_classifier.launch.xml b/perception/autoware_traffic_light_classifier/launch/traffic_light_classifier.launch.xml index d0cbbd3dcae9b..42ffff49848a1 100644 --- a/perception/autoware_traffic_light_classifier/launch/traffic_light_classifier.launch.xml +++ b/perception/autoware_traffic_light_classifier/launch/traffic_light_classifier.launch.xml @@ -1,14 +1,14 @@ - + - + diff --git a/perception/autoware_traffic_light_classifier/src/traffic_light_classifier_node.cpp b/perception/autoware_traffic_light_classifier/src/traffic_light_classifier_node.cpp index 796a144bf8266..cca9c810fdc6b 100644 --- 
a/perception/autoware_traffic_light_classifier/src/traffic_light_classifier_node.cpp +++ b/perception/autoware_traffic_light_classifier/src/traffic_light_classifier_node.cpp @@ -43,7 +43,7 @@ TrafficLightClassifierNodelet::TrafficLightClassifierNodelet(const rclcpp::NodeO } traffic_signal_array_pub_ = this->create_publisher( - "~/output/traffic_signals", rclcpp::QoS{1}); + "~/output/traffic_lights", rclcpp::QoS{1}); using std::chrono_literals::operator""ms; timer_ = rclcpp::create_timer( diff --git a/perception/autoware_traffic_light_multi_camera_fusion/README.md b/perception/autoware_traffic_light_multi_camera_fusion/README.md index f7ee294cda147..54df5e703cb76 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/README.md +++ b/perception/autoware_traffic_light_multi_camera_fusion/README.md @@ -15,7 +15,7 @@ For every camera, the following three topics are subscribed: | -------------------------------------- | ---------------------------------------------- | --------------------------------------------------- | | `~//camera_info` | sensor_msgs::CameraInfo | camera info from traffic_light_map_based_detector | | `~//rois` | tier4_perception_msgs::TrafficLightRoiArray | detection roi from traffic_light_fine_detector | -| `~//traffic_signals` | tier4_perception_msgs::TrafficLightSignalArray | classification result from traffic_light_classifier | +| `~//traffic_lights` | tier4_perception_msgs::TrafficLightSignalArray | classification result from traffic_light_classifier | You don't need to configure these topics manually. Just provide the `camera_namespaces` parameter and the node will automatically extract the `` and create the subscribers. @@ -23,7 +23,7 @@ You don't need to configure these topics manually. Just provide the `camera_name | Name | Type | Description | | -------------------------- | ------------------------------------------------- | ---------------------------------- | -| `~/output/traffic_signals` | autoware_perception_msgs::TrafficLightSignalArray | traffic light signal fusion result | +| `~/output/traffic_lights` | autoware_perception_msgs::TrafficLightSignalArray | traffic light signal fusion result | ## Node parameters diff --git a/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml b/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml index 32e3417cf9029..5d79373991013 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml +++ b/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml @@ -2,11 +2,11 @@ - + - + diff --git a/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp b/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp index 67d6102545e47..7ae20e9c38089 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp +++ b/perception/autoware_traffic_light_multi_camera_fusion/src/traffic_light_multi_camera_fusion_node.cpp @@ -154,7 +154,7 @@ MultiCameraFusion::MultiCameraFusion(const rclcpp::NodeOptions & node_options) is_approximate_sync_ = this->declare_parameter("approximate_sync", false); message_lifespan_ = this->declare_parameter("message_lifespan", 0.09); for (const std::string & camera_ns : camera_namespaces) { - std::string signal_topic = camera_ns + 
"/classification/traffic_signals"; + std::string signal_topic = camera_ns + "/classification/traffic_lights"; std::string roi_topic = camera_ns + "/detection/rois"; std::string cam_info_topic = camera_ns + "/camera_info"; roi_subs_.emplace_back( @@ -181,7 +181,7 @@ MultiCameraFusion::MultiCameraFusion(const rclcpp::NodeOptions & node_options) map_sub_ = create_subscription( "~/input/vector_map", rclcpp::QoS{1}.transient_local(), std::bind(&MultiCameraFusion::mapCallback, this, _1)); - signal_pub_ = create_publisher("~/output/traffic_signals", rclcpp::QoS{1}); + signal_pub_ = create_publisher("~/output/traffic_lights", rclcpp::QoS{1}); } void MultiCameraFusion::trafficSignalRoiCallback( diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md index ab985c7b17c15..dcb896fca819f 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/README.md +++ b/perception/autoware_traffic_light_occlusion_predictor/README.md @@ -15,8 +15,8 @@ If no point cloud is received or all point clouds have very large stamp differen | Name | Type | Description | | ------------------------------------ | --------------------------------------------------- | -------------------------------- | | `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | -| `~/input/car/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | -| `~/input/pedestrian/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | +| `~/input/car/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | +| `~/input/pedestrian/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | | `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | | `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | | `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | @@ -25,7 +25,7 @@ If no point cloud is received or all point clouds have very large stamp differen | Name | Type | Description | | -------------------------- | --------------------------------------------- | ------------------------------------------------------------ | -| `~/output/traffic_signals` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | +| `~/output/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | ## Node parameters diff --git a/perception/autoware_traffic_light_occlusion_predictor/launch/traffic_light_occlusion_predictor.launch.xml b/perception/autoware_traffic_light_occlusion_predictor/launch/traffic_light_occlusion_predictor.launch.xml index d59d5a7717297..d9fae26b7fd55 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/launch/traffic_light_occlusion_predictor.launch.xml +++ b/perception/autoware_traffic_light_occlusion_predictor/launch/traffic_light_occlusion_predictor.launch.xml @@ -4,9 +4,9 @@ - - - + + + @@ -14,9 +14,9 @@ - - - + + + diff --git a/perception/autoware_traffic_light_occlusion_predictor/src/node.cpp b/perception/autoware_traffic_light_occlusion_predictor/src/node.cpp index 16498eb4d7094..99d76e2004c90 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/src/node.cpp +++ b/perception/autoware_traffic_light_occlusion_predictor/src/node.cpp @@ -58,7 
+58,7 @@ TrafficLightOcclusionPredictorNode::TrafficLightOcclusionPredictorNode( // publishers signal_pub_ = - create_publisher("~/output/traffic_signals", 1); + create_publisher("~/output/traffic_lights", 1); // configuration parameters config_.azimuth_occlusion_resolution_deg = @@ -75,7 +75,7 @@ TrafficLightOcclusionPredictorNode::TrafficLightOcclusionPredictorNode( config_.elevation_occlusion_resolution_deg); const std::vector topics{ - "~/input/car/traffic_signals", "~/input/rois", "~/input/camera_info", "~/input/cloud"}; + "~/input/car/traffic_lights", "~/input/rois", "~/input/camera_info", "~/input/cloud"}; const std::vector qos(topics.size(), rclcpp::SensorDataQoS()); synchronizer_ = std::make_shared( this, topics, qos, @@ -85,7 +85,7 @@ TrafficLightOcclusionPredictorNode::TrafficLightOcclusionPredictorNode( config_.max_image_cloud_delay, config_.max_wait_t); const std::vector topics_ped{ - "~/input/pedestrian/traffic_signals", "~/input/rois", "~/input/camera_info", "~/input/cloud"}; + "~/input/pedestrian/traffic_lights", "~/input/rois", "~/input/camera_info", "~/input/cloud"}; const std::vector qos_ped(topics_ped.size(), rclcpp::SensorDataQoS()); synchronizer_ped_ = std::make_shared( this, topics_ped, qos_ped, diff --git a/perception/autoware_traffic_light_visualization/launch/traffic_light_map_visualizer.launch.xml b/perception/autoware_traffic_light_visualization/launch/traffic_light_map_visualizer.launch.xml index 8ff56915766aa..1c580cd7ecbdb 100644 --- a/perception/autoware_traffic_light_visualization/launch/traffic_light_map_visualizer.launch.xml +++ b/perception/autoware_traffic_light_visualization/launch/traffic_light_map_visualizer.launch.xml @@ -1,7 +1,7 @@ - + - + diff --git a/perception/autoware_traffic_light_visualization/launch/traffic_light_roi_visualizer.launch.xml b/perception/autoware_traffic_light_visualization/launch/traffic_light_roi_visualizer.launch.xml index d4af7a27636df..be61276d58d7b 100644 --- a/perception/autoware_traffic_light_visualization/launch/traffic_light_roi_visualizer.launch.xml +++ b/perception/autoware_traffic_light_visualization/launch/traffic_light_roi_visualizer.launch.xml @@ -2,7 +2,7 @@ - + @@ -11,7 +11,7 @@ - + diff --git a/perception/autoware_traffic_light_visualization/src/traffic_light_roi_visualizer/node.cpp b/perception/autoware_traffic_light_visualization/src/traffic_light_roi_visualizer/node.cpp index 7ef13cf457c07..891011b8cac7a 100644 --- a/perception/autoware_traffic_light_visualization/src/traffic_light_roi_visualizer/node.cpp +++ b/perception/autoware_traffic_light_visualization/src/traffic_light_roi_visualizer/node.cpp @@ -76,7 +76,7 @@ void TrafficLightRoiVisualizerNode::connectCb() image_sub_.subscribe(this, "~/input/image", "raw", rmw_qos_profile_sensor_data); roi_sub_.subscribe(this, "~/input/rois", rclcpp::QoS{1}.get_rmw_qos_profile()); traffic_signals_sub_.subscribe( - this, "~/input/traffic_signals", rclcpp::QoS{1}.get_rmw_qos_profile()); + this, "~/input/traffic_lights", rclcpp::QoS{1}.get_rmw_qos_profile()); if (enable_fine_detection_) { rough_roi_sub_.subscribe(this, "~/input/rough/rois", rclcpp::QoS{1}.get_rmw_qos_profile()); } From 18849ecd333af1e31eddd7e5f1aaf659045dbcf8 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Tue, 3 Sep 2024 04:01:36 +0000 Subject: [PATCH 24/37] style(pre-commit): autofix --- perception/autoware_traffic_light_occlusion_predictor/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git 
a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md index dcb896fca819f..b36cb3d78e852 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/README.md +++ b/perception/autoware_traffic_light_occlusion_predictor/README.md @@ -8,7 +8,7 @@ For each traffic light roi, hundreds of pixels would be selected and projected i ![image](images/occlusion.png) -If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. The signal whose occlusion ratio is larger than `max_occlusion_ratio `will be set as unknown type. +If no point cloud is received or all point clouds have very large stamp difference with the camera image, the occlusion ratio of each roi would be set as 0. The signal whose occlusion ratio is larger than `max_occlusion_ratio` will be set as unknown type. ## Input topics From f3efde58f0c8aa3e6b05d7cd1d65ea16c67508fc Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Thu, 5 Sep 2024 12:55:06 +0900 Subject: [PATCH 25/37] fix: topic names in launch file Signed-off-by: tzhong518 --- .../traffic_light.launch.xml | 24 +++++++-------- .../traffic_light_node_container.launch.py | 29 ++++++++++++++----- .../launch/map_based_prediction.launch.xml | 4 +-- ...affic_light_multi_camera_fusion.launch.xml | 2 +- 4 files changed, 36 insertions(+), 23 deletions(-) diff --git a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light.launch.xml b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light.launch.xml index d4a0a8b5abdd8..eac82e18f4d35 100644 --- a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light.launch.xml +++ b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light.launch.xml @@ -4,10 +4,10 @@ - - - - + + + + @@ -60,16 +60,16 @@ - + - - - + + + @@ -77,16 +77,16 @@ - - + + - - + + diff --git a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py index 9603570b3cfe7..41e80c49210d4 100644 --- a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py +++ b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py @@ -99,9 +99,9 @@ def create_parameter_dict(*args): namespace="classification", parameters=[car_traffic_light_classifier_model_param], remappings=[ - ("~/input/image", camera_arguments["input/image"]), - ("~/input/rois", camera_arguments["output/rois"]), - ("~/output/traffic_signals", "classified/car/traffic_signals"), + ("~/input/image", LaunchConfiguration("input/image")), + ("~/input/rois", LaunchConfiguration("output/rois")), + ("~/output/traffic_lights", "classified/car/traffic_lights"), ], extra_arguments=[ {"use_intra_process_comms": LaunchConfiguration("use_intra_process")} @@ -114,9 +114,9 @@ def create_parameter_dict(*args): namespace="classification", parameters=[pedestrian_traffic_light_classifier_model_param], remappings=[ - ("~/input/image", camera_arguments["input/image"]), - ("~/input/rois", camera_arguments["output/rois"]), - ("~/output/traffic_signals", "classified/pedestrian/traffic_signals"), + ("~/input/image", LaunchConfiguration("input/image")), + ("~/input/rois", LaunchConfiguration("output/rois")), + ("~/output/traffic_lights", 
"classified/pedestrian/traffic_lights"), ], extra_arguments=[ {"use_intra_process_comms": LaunchConfiguration("use_intra_process")} @@ -132,8 +132,8 @@ def create_parameter_dict(*args): ("~/input/rois", camera_arguments["output/rois"]), ("~/input/rough/rois", "detection/rough/rois"), ( - "~/input/traffic_signals", - camera_arguments["output/traffic_signals"], + "~/input/traffic_lights", + LaunchConfiguration("output/traffic_lights"), ), ("~/output/image", "debug/rois"), ("~/output/image/compressed", "debug/rois/compressed"), @@ -217,6 +217,19 @@ def add_launch_arg(name: str, default_value=None, description=None): add_launch_arg("enable_image_decompressor", "True") add_launch_arg("enable_fine_detection", "True") add_launch_arg("use_image_transport", "True") + add_launch_arg("input/image", "/sensing/camera/traffic_light/image_raw") + add_launch_arg("output/rois", "/perception/traffic_light_recognition/rois") + add_launch_arg( + "output/traffic_lights", + "/perception/traffic_light_recognition/traffic_signals", + ) + add_launch_arg( + "output/car/traffic_lights", "/perception/traffic_light_recognition/car/traffic_lights" + ) + add_launch_arg( + "output/pedestrian/traffic_lights", + "/perception/traffic_light_recognition/pedestrian/traffic_lights", + ) # traffic_light_fine_detector add_launch_arg( diff --git a/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml b/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml index 915dc53002359..00b839e07355e 100644 --- a/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml +++ b/perception/autoware_map_based_prediction/launch/map_based_prediction.launch.xml @@ -3,14 +3,14 @@ - + - + diff --git a/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml b/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml index 5d79373991013..430aa0056d9ec 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml +++ b/perception/autoware_traffic_light_multi_camera_fusion/launch/traffic_light_multi_camera_fusion.launch.xml @@ -2,7 +2,7 @@ - + From 0a6a4be3223c439b20dc9ab70d3d47c32aa9215c Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Thu, 5 Sep 2024 13:24:25 +0900 Subject: [PATCH 26/37] fix: descriptions Signed-off-by: tzhong518 --- .../autoware_crosswalk_traffic_light_estimator/README.md | 6 +++--- perception/autoware_traffic_light_classifier/README.md | 2 +- .../autoware_traffic_light_map_based_detector/README.md | 2 +- 3 files changed, 5 insertions(+), 5 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 8c81ed68efa7b..eb14de539ee8c 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -11,7 +11,7 @@ | Name | Type | Description | | ------------------------------------ | ------------------------------------------------ | ------------------ | | `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | -| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route | +| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | | `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | ### Output @@ -25,8 +25,8 @@ 
| Name | Type | Description | Default value | | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | -| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. | +| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. The unit is second. | `2.0` | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. | ## Inner-workings / Algorithms diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 85b1c331fa590..d110391d12d4b 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -63,7 +63,7 @@ These colors and shapes are assigned to the message as follows: | ----------------------------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `classifier_type` | int | if the value is `1`, cnn_classifier is used | | `data_path` | str | packages data and artifacts directory path | -| `backlight_threshold` | float | If the intensity get grater than this overwrite with UNKNOWN in corresponding RoI. Note that, if the value is much higher, the node only overwrites in the harsher backlight situations. Therefore, If you wouldn't like to use this feature set this value to `1.0`. The value can be `[0.0, 1.0]`. The confidence of overwritten signal is set to `0.0`. | +| `backlight_threshold` | float | If the intensity of light is greater than this threshold, the class of the corresponding RoI will be overwritten with UNKNOWN, and the confidence of the overwritten signal will be set to `0.0`. The value should be set in the range of `[0.0, 1.0]`. If you do not want to use this feature, please set it to `1.0`. | | `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. 
if the value is `1`, it classifies the pedestrian signals. | ### Core Parameters diff --git a/perception/autoware_traffic_light_map_based_detector/README.md b/perception/autoware_traffic_light_map_based_detector/README.md index a7baf9538eaa7..201fa791b378e 100644 --- a/perception/autoware_traffic_light_map_based_detector/README.md +++ b/perception/autoware_traffic_light_map_based_detector/README.md @@ -9,7 +9,7 @@ Calibration and vibration errors can be entered as parameters, and the size of t ![traffic_light_map_based_detector_result](./docs/traffic_light_map_based_detector_result.svg) If the node receives route information, it only looks at traffic lights on that route. -If the node receives no route information, it looks at a radius of `max_detection_range` and the angle between the traffic light and the camera is less than `traffic_light_max_angle_range`. +If the node receives no route information, it looks at traffic lights within a radius of `max_detection_range`. If the angle between the traffic light and the camera is larger than `traffic_light_max_angle_range`, it will be filtered. ## Input topics From 520454bc09ea2f6c0e8de1b94c0743c4c200a837 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Tue, 3 Sep 2024 05:06:48 +0000 Subject: [PATCH 27/37] style(pre-commit): autofix --- .../README.md | 6 +++--- .../autoware_traffic_light_arbiter/README.md | 10 +++++----- .../autoware_traffic_light_classifier/README.md | 6 +++--- .../README.md | 12 ++++++------ .../README.md | 16 ++++++++-------- 5 files changed, 25 insertions(+), 25 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index eb14de539ee8c..d332b53dd77f2 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -12,12 +12,12 @@ | ------------------------------------ | ------------------------------------------------ | ------------------ | | `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | | `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | -| `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | +| `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | ### Output -| Name | Type | Description | -| -------------------------- | ------------------------------------------------------- | --------------------------------------------------------- | +| Name | Type | Description | +| ------------------------- | ------------------------------------------------------- | --------------------------------------------------------- | | `~/output/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | output that contains estimated pedestrian traffic signals | ## Parameters diff --git a/perception/autoware_traffic_light_arbiter/README.md b/perception/autoware_traffic_light_arbiter/README.md index 4260b50bfe9ec..2d185bfc4f9e3 100644 --- a/perception/autoware_traffic_light_arbiter/README.md +++ b/perception/autoware_traffic_light_arbiter/README.md @@ -25,16 +25,16 @@ The table below outlines how the matching process determines the output based on #### Input -| Name | Type | Description | -| -------------------------------- | ----------------------------------------------------- | 
-------------------------------------------------------- | -| ~/sub/vector_map | autoware_map_msgs::msg::LaneletMapBin | The vector map to get valid traffic signal ids. | +| Name | Type | Description | +| ------------------------------- | ----------------------------------------------------- | -------------------------------------------------------- | +| ~/sub/vector_map | autoware_map_msgs::msg::LaneletMapBin | The vector map to get valid traffic signal ids. | | ~/sub/perception_traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from the image recognition pipeline. | | ~/sub/external_traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The traffic signals from an external system. | #### Output -| Name | Type | Description | -| --------------------- | ----------------------------------------------------- | -------------------------------- | +| Name | Type | Description | +| -------------------- | ----------------------------------------------------- | -------------------------------- | | ~/pub/traffic_lights | autoware_perception_msgs::msg::TrafficLightGroupArray | The merged traffic signal state. | ## Parameters diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index d110391d12d4b..0f36861e234a3 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -50,10 +50,10 @@ These colors and shapes are assigned to the message as follows: ### Output -| Name | Type | Description | -| -------------------------- | ----------------------------------------------- | ------------------- | +| Name | Type | Description | +| ------------------------- | ----------------------------------------------- | ------------------- | | `~/output/traffic_lights` | `tier4_perception_msgs::msg::TrafficLightArray` | classified signals | -| `~/output/debug/image` | `sensor_msgs::msg::Image` | image for debugging | +| `~/output/debug/image` | `sensor_msgs::msg::Image` | image for debugging | ## Parameters diff --git a/perception/autoware_traffic_light_multi_camera_fusion/README.md b/perception/autoware_traffic_light_multi_camera_fusion/README.md index 54df5e703cb76..6a4f26585a551 100644 --- a/perception/autoware_traffic_light_multi_camera_fusion/README.md +++ b/perception/autoware_traffic_light_multi_camera_fusion/README.md @@ -11,18 +11,18 @@ For every camera, the following three topics are subscribed: -| Name | Type | Description | -| -------------------------------------- | ---------------------------------------------- | --------------------------------------------------- | -| `~//camera_info` | sensor_msgs::CameraInfo | camera info from traffic_light_map_based_detector | -| `~//rois` | tier4_perception_msgs::TrafficLightRoiArray | detection roi from traffic_light_fine_detector | +| Name | Type | Description | +| ------------------------------------- | ---------------------------------------------- | --------------------------------------------------- | +| `~//camera_info` | sensor_msgs::CameraInfo | camera info from traffic_light_map_based_detector | +| `~//rois` | tier4_perception_msgs::TrafficLightRoiArray | detection roi from traffic_light_fine_detector | | `~//traffic_lights` | tier4_perception_msgs::TrafficLightSignalArray | classification result from traffic_light_classifier | You don't need to configure these topics manually. 
Just provide the `camera_namespaces` parameter and the node will automatically extract the `` and create the subscribers. ## Output topics -| Name | Type | Description | -| -------------------------- | ------------------------------------------------- | ---------------------------------- | +| Name | Type | Description | +| ------------------------- | ------------------------------------------------- | ---------------------------------- | | `~/output/traffic_lights` | autoware_perception_msgs::TrafficLightSignalArray | traffic light signal fusion result | ## Node parameters diff --git a/perception/autoware_traffic_light_occlusion_predictor/README.md b/perception/autoware_traffic_light_occlusion_predictor/README.md index b36cb3d78e852..d2473c6acf701 100644 --- a/perception/autoware_traffic_light_occlusion_predictor/README.md +++ b/perception/autoware_traffic_light_occlusion_predictor/README.md @@ -12,19 +12,19 @@ If no point cloud is received or all point clouds have very large stamp differen ## Input topics -| Name | Type | Description | -| ------------------------------------ | --------------------------------------------------- | -------------------------------- | -| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | +| Name | Type | Description | +| ----------------------------------- | --------------------------------------------------- | -------------------------------- | +| `~input/vector_map` | autoware_auto_mapping_msgs::HADMapBin | vector map | | `~/input/car/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | vehicular traffic light signals | | `~/input/pedestrian/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | pedestrian traffic light signals | -| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | -| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | -| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | +| `~/input/rois` | autoware_auto_perception_msgs::TrafficLightRoiArray | traffic light detections | +| `~input/camera_info` | sensor_msgs::CameraInfo | target camera parameter | +| `~/input/cloud` | sensor_msgs::PointCloud2 | LiDAR point cloud | ## Output topics -| Name | Type | Description | -| -------------------------- | --------------------------------------------- | ------------------------------------------------------------ | +| Name | Type | Description | +| ------------------------- | --------------------------------------------- | ------------------------------------------------------------ | | `~/output/traffic_lights` | tier4_perception_msgs::msg::TrafficLightArray | traffic light signals reset according to the occlusion ratio | ## Node parameters From bf207de554e165f0864ec179ef462c5bc17c1d7f Mon Sep 17 00:00:00 2001 From: Tao Zhong <55872497+tzhong518@users.noreply.github.com> Date: Thu, 5 Sep 2024 13:03:18 +0900 Subject: [PATCH 28/37] Update perception/autoware_traffic_light_classifier/README.md Co-authored-by: Kenzo Lobos Tsunekawa --- perception/autoware_traffic_light_classifier/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 0f36861e234a3..1c05a0b4ca839 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -17,7 +17,7 @@ The information of the models is listed here: | EfficientNet-b1 | 128 x 128 | 99.76% 
| | MobileNet-v2 | 224 x 224 | 99.81% | -For pedestrian signals, totally 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. +For pedestrian signals, a total of 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: | Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | From f1a384f493cc10c7b208843359b7d297a1db5b6e Mon Sep 17 00:00:00 2001 From: Tao Zhong <55872497+tzhong518@users.noreply.github.com> Date: Thu, 5 Sep 2024 13:03:25 +0900 Subject: [PATCH 29/37] Update perception/autoware_traffic_light_classifier/README.md Co-authored-by: Kenzo Lobos Tsunekawa --- perception/autoware_traffic_light_classifier/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index 1c05a0b4ca839..cd0b4ec4a9d21 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -10,7 +10,7 @@ traffic_light_classifier is a package for classifying traffic light labels using Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. We trained classifiers for vehicular signals and pedestrian signals separately. -For vehicular signals, totally 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. +For vehicular signals, a total of 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. 
The information of the models is listed here: | Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | From 5791db03e9daa56512786c1373ceb71f3466a291 Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Fri, 6 Dec 2024 14:19:27 +0900 Subject: [PATCH 30/37] fix: conflict Signed-off-by: tzhong518 --- .../traffic_light_node_container.launch.py | 29 +++++-------------- ...raffic_light_occlusion_predictor.launch.py | 6 ++-- 2 files changed, 11 insertions(+), 24 deletions(-) diff --git a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py index 41e80c49210d4..d994e871b9481 100644 --- a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py +++ b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py @@ -62,9 +62,9 @@ def create_traffic_light_node_container(namespace, context, *args, **kwargs): camera_arguments = { "input/image": f"/sensing/camera/{namespace}/image_raw", "output/rois": f"/perception/traffic_light_recognition/{namespace}/detection/rois", - "output/traffic_signals": f"/perception/traffic_light_recognition/{namespace}/classification/traffic_signals", - "output/car/traffic_signals": f"/perception/traffic_light_recognition/{namespace}/classification/car/traffic_signals", - "output/pedestrian/traffic_signals": f"/perception/traffic_light_recognition/{namespace}/classification/pedestrian/traffic_signals", + "output/traffic_lights": f"/perception/traffic_light_recognition/{namespace}/classification/traffic_lights", + "output/car/traffic_lights": f"/perception/traffic_light_recognition/{namespace}/classification/car/traffic_lights", + "output/pedestrian/traffic_lights": f"/perception/traffic_light_recognition/{namespace}/classification/pedestrian/traffic_lights", } def create_parameter_dict(*args): @@ -99,8 +99,8 @@ def create_parameter_dict(*args): namespace="classification", parameters=[car_traffic_light_classifier_model_param], remappings=[ - ("~/input/image", LaunchConfiguration("input/image")), - ("~/input/rois", LaunchConfiguration("output/rois")), + ("~/input/image",camera_arguments["input/image"]), + ("~/input/rois", camera_arguments["output/rois"]), ("~/output/traffic_lights", "classified/car/traffic_lights"), ], extra_arguments=[ @@ -114,8 +114,8 @@ def create_parameter_dict(*args): namespace="classification", parameters=[pedestrian_traffic_light_classifier_model_param], remappings=[ - ("~/input/image", LaunchConfiguration("input/image")), - ("~/input/rois", LaunchConfiguration("output/rois")), + ("~/input/image", camera_arguments["input/image"]), + ("~/input/rois", camera_arguments["output/rois"]), ("~/output/traffic_lights", "classified/pedestrian/traffic_lights"), ], extra_arguments=[ @@ -133,7 +133,7 @@ def create_parameter_dict(*args): ("~/input/rough/rois", "detection/rough/rois"), ( "~/input/traffic_lights", - LaunchConfiguration("output/traffic_lights"), + camera_arguments["output/traffic_lights"], ), ("~/output/image", "debug/rois"), ("~/output/image/compressed", "debug/rois/compressed"), @@ -217,19 +217,6 @@ def add_launch_arg(name: str, default_value=None, description=None): add_launch_arg("enable_image_decompressor", "True") add_launch_arg("enable_fine_detection", "True") add_launch_arg("use_image_transport", "True") - add_launch_arg("input/image", "/sensing/camera/traffic_light/image_raw") - 
add_launch_arg("output/rois", "/perception/traffic_light_recognition/rois") - add_launch_arg( - "output/traffic_lights", - "/perception/traffic_light_recognition/traffic_signals", - ) - add_launch_arg( - "output/car/traffic_lights", "/perception/traffic_light_recognition/car/traffic_lights" - ) - add_launch_arg( - "output/pedestrian/traffic_lights", - "/perception/traffic_light_recognition/pedestrian/traffic_lights", - ) # traffic_light_fine_detector add_launch_arg( diff --git a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_occlusion_predictor.launch.py b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_occlusion_predictor.launch.py index b4a95165758b7..16dda3ae8c99d 100644 --- a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_occlusion_predictor.launch.py +++ b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_occlusion_predictor.launch.py @@ -33,9 +33,9 @@ def create_traffic_light_occlusion_predictor(namespace): "input/camera_info": f"/sensing/camera/{namespace}/camera_info", "input/cloud": LaunchConfiguration("input/cloud"), "input/rois": f"/perception/traffic_light_recognition/{namespace}/detection/rois", - "input/car/traffic_signals": "classified/car/traffic_signals", - "input/pedestrian/traffic_signals": "classified/pedestrian/traffic_signals", - "output/traffic_signals": f"/perception/traffic_light_recognition/{namespace}/classification/traffic_signals", + "input/car/traffic_lights": "classified/car/traffic_lights", + "input/pedestrian/traffic_lights": "classified/pedestrian/traffic_lights", + "output/traffic_lights": f"/perception/traffic_light_recognition/{namespace}/classification/traffic_lights", }.items() group = GroupAction( From dac207e7fe65e5c114193c5efdd0fd2b1c24dd9e Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Fri, 6 Dec 2024 05:34:38 +0000 Subject: [PATCH 31/37] style(pre-commit): autofix --- .../traffic_light_node_container.launch.py | 2 +- .../README.md | 21 ++++++++------- .../README.md | 26 ++++++++++--------- 3 files changed, 27 insertions(+), 22 deletions(-) diff --git a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py index d994e871b9481..74b577c701f1d 100644 --- a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py +++ b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py @@ -99,7 +99,7 @@ def create_parameter_dict(*args): namespace="classification", parameters=[car_traffic_light_classifier_model_param], remappings=[ - ("~/input/image",camera_arguments["input/image"]), + ("~/input/image", camera_arguments["input/image"]), ("~/input/rois", camera_arguments["output/rois"]), ("~/output/traffic_lights", "classified/car/traffic_lights"), ], diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 78f731e5c30ba..976b141866b65 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -9,18 +9,21 @@ ### Input <<<<<<< HEAD -| Name | Type | Description | -| ------------------------------------ | ------------------------------------------------ | 
------------------ | -| `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | -| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | -| `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | -======= + | Name | Type | Description | | ----------------------------------- | ------------------------------------------------------- | ------------------ | | `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | | `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | | `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | ->>>>>>> 200573ce766fc67663ffc721632f6520d392a775 + +======= +| Name | Type | Description | +| ----------------------------------- | ------------------------------------------------------- | ------------------ | +| `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | +| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | +| `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | + +> > > > > > > 200573ce766fc67663ffc721632f6520d392a775 ### Output @@ -33,8 +36,8 @@ | Name | Type | Description | Default value | | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | -| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. The unit is second. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. | +| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. The unit is second. | `2.0` | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. | ## Inner-workings / Algorithms diff --git a/perception/autoware_traffic_light_classifier/README.md b/perception/autoware_traffic_light_classifier/README.md index cd0b4ec4a9d21..fd864b4cf27dd 100644 --- a/perception/autoware_traffic_light_classifier/README.md +++ b/perception/autoware_traffic_light_classifier/README.md @@ -12,17 +12,19 @@ Traffic light labels are classified by EfficientNet-b1 or MobileNet-v2. 
We trained classifiers for vehicular signals and pedestrian signals separately. For vehicular signals, a total of 83400 (58600 for training, 14800 for evaluation and 10000 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: + | Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | -| EfficientNet-b1 | 128 x 128 | 99.76% | -| MobileNet-v2 | 224 x 224 | 99.81% | +| EfficientNet-b1 | 128 x 128 | 99.76% | +| MobileNet-v2 | 224 x 224 | 99.81% | For pedestrian signals, a total of 21199 (17860 for training, 2114 for evaluation and 1225 for test) TIER IV internal images of Japanese traffic lights were used for fine-tuning. The information of the models is listed here: + | Name | Input Size | Test Accuracy | | --------------- | ---------- | ------------- | -| EfficientNet-b1 | 128 x 128 | 97.89% | -| MobileNet-v2 | 224 x 224 | 99.10% | +| EfficientNet-b1 | 128 x 128 | 97.89% | +| MobileNet-v2 | 224 x 224 | 99.10% | ### hsv_classifier @@ -50,10 +52,10 @@ These colors and shapes are assigned to the message as follows: ### Output -| Name | Type | Description | -| -------------------------- | ----------------------------------------------- | ------------------- | +| Name | Type | Description | +| ------------------------- | ----------------------------------------------- | ------------------- | | `~/output/traffic_lights` | `tier4_perception_msgs::msg::TrafficLightArray` | classified signals | -| `~/output/debug/image` | `sensor_msgs::msg::Image` | image for debugging | +| `~/output/debug/image` | `sensor_msgs::msg::Image` | image for debugging | ## Parameters @@ -59,12 +61,12 @@ These colors and shapes are assigned to the message as follows: ### Node Parameters -| Name | Type | Description | -| ----------------------------- | ----- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | -| `classifier_type` | int | if the value is `1`, cnn_classifier is used | -| `data_path` | str | packages data and artifacts directory path | -| `backlight_threshold` | float | If the intensity of light is greater than this threshold, the class of the corresponding RoI will be overwritten with UNKNOWN, and the confidence of the overwritten signal will be set to `0.0`. The value should be set in the range of `[0.0, 1.0]`. If you do not want to use this feature, please set it to `1.0`. | -| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. | +| Name | Type | Description | +| ----------------------------- | ----- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | +| `classifier_type` | int | if the value is `1`, cnn_classifier is used | +| `data_path` | str | packages data and artifacts directory path | +| `backlight_threshold` | float | If the intensity of light is greater than this threshold, the class of the corresponding RoI will be overwritten with UNKNOWN, and the confidence of the overwritten signal will be set to `0.0`. The value should be set in the range of `[0.0, 1.0]`. If you do not want to use this feature, please set it to `1.0`. | +| `classify_traffic_light_type` | int | if the value is `0`, the classifier classifies the vehicular signals. if the value is `1`, it classifies the pedestrian signals. 
| ### Core Parameters From dbfab6ece57adedb43cfcc5d28a30ddd96b10b81 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Fri, 6 Dec 2024 05:40:02 +0000 Subject: [PATCH 32/37] style(pre-commit): autofix --- .../autoware_crosswalk_traffic_light_estimator/README.md | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index 976b141866b65..e512ab5757e94 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -17,10 +17,11 @@ | `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | ======= -| Name | Type | Description | + +| Name | Type | Description | | ----------------------------------- | ------------------------------------------------------- | ------------------ | -| `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | -| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | +| `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | +| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | | `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | > > > > > > > 200573ce766fc67663ffc721632f6520d392a775 From 699174a3cf166e8e23f13b2d39b2f63c7511dfeb Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Fri, 6 Dec 2024 14:52:50 +0900 Subject: [PATCH 33/37] fix: precommit Signed-off-by: tzhong518 --- .../traffic_light_node_container.launch.py | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py index d994e871b9481..74b577c701f1d 100644 --- a/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py +++ b/launch/tier4_perception_launch/launch/traffic_light_recognition/traffic_light_node_container.launch.py @@ -99,7 +99,7 @@ def create_parameter_dict(*args): namespace="classification", parameters=[car_traffic_light_classifier_model_param], remappings=[ - ("~/input/image",camera_arguments["input/image"]), + ("~/input/image", camera_arguments["input/image"]), ("~/input/rois", camera_arguments["output/rois"]), ("~/output/traffic_lights", "classified/car/traffic_lights"), ], From 7f871e4b15077f9fdc30a7c03ad6f4e70b477b91 Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Fri, 6 Dec 2024 14:54:53 +0900 Subject: [PATCH 34/37] fix: precommit Signed-off-by: tzhong518 --- .pre-commit-config.yaml | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/.pre-commit-config.yaml b/.pre-commit-config.yaml index 63dc504f61a2b..cd3dc65206dcf 100644 --- a/.pre-commit-config.yaml +++ b/.pre-commit-config.yaml @@ -3,7 +3,7 @@ ci: repos: - repo: https://github.com/pre-commit/pre-commit-hooks - rev: v4.6.0 + rev: v5.0.0 hooks: - id: check-json - id: check-merge-conflict From 0e36360d014dbfe447caf14faefacb2fc41fb2be Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Tue, 10 Dec 2024 08:17:17 +0900 Subject: [PATCH 35/37] fix: test Signed-off-by: tzhong518 --- .../test/test_node.cpp | 16 +++++++--------- 1 file changed, 7 insertions(+), 9 
deletions(-) diff --git a/perception/autoware_traffic_light_arbiter/test/test_node.cpp b/perception/autoware_traffic_light_arbiter/test/test_node.cpp index f993ad7cec84d..7e0e29541cb53 100644 --- a/perception/autoware_traffic_light_arbiter/test/test_node.cpp +++ b/perception/autoware_traffic_light_arbiter/test/test_node.cpp @@ -175,9 +175,8 @@ TEST(TrafficLightArbiterTest, testTrafficSignalOnlyPerceptionMsg) { rclcpp::init(0, nullptr); const std::string input_map_topic = "/traffic_light_arbiter/sub/vector_map"; - const std::string input_perception_topic = - "/traffic_light_arbiter/sub/perception_traffic_signals"; - const std::string output_topic = "/traffic_light_arbiter/pub/traffic_signals"; + const std::string input_perception_topic = "/traffic_light_arbiter/sub/perception_traffic_lights"; + const std::string output_topic = "/traffic_light_arbiter/pub/traffic_lights"; auto test_manager = generateTestManager(); auto test_target_node = generateNode(); @@ -209,8 +208,8 @@ TEST(TrafficLightArbiterTest, testTrafficSignalOnlyExternalMsg) { rclcpp::init(0, nullptr); const std::string input_map_topic = "/traffic_light_arbiter/sub/vector_map"; - const std::string input_external_topic = "/traffic_light_arbiter/sub/external_traffic_signals"; - const std::string output_topic = "/traffic_light_arbiter/pub/traffic_signals"; + const std::string input_external_topic = "/traffic_light_arbiter/sub/external_traffic_lights"; + const std::string output_topic = "/traffic_light_arbiter/pub/traffic_lights"; auto test_manager = generateTestManager(); auto test_target_node = generateNode(); @@ -242,10 +241,9 @@ TEST(TrafficLightArbiterTest, testTrafficSignalBothMsg) { rclcpp::init(0, nullptr); const std::string input_map_topic = "/traffic_light_arbiter/sub/vector_map"; - const std::string input_perception_topic = - "/traffic_light_arbiter/sub/perception_traffic_signals"; - const std::string input_external_topic = "/traffic_light_arbiter/sub/external_traffic_signals"; - const std::string output_topic = "/traffic_light_arbiter/pub/traffic_signals"; + const std::string input_perception_topic = "/traffic_light_arbiter/sub/perception_traffic_lights"; + const std::string input_external_topic = "/traffic_light_arbiter/sub/external_traffic_lights"; + const std::string output_topic = "/traffic_light_arbiter/pub/traffic_lights"; auto test_manager = generateTestManager(); auto test_target_node = generateNode(); From 6e269bcad4fc9f75acea97b64ca70303f795dbaa Mon Sep 17 00:00:00 2001 From: tzhong518 Date: Tue, 10 Dec 2024 08:29:11 +0900 Subject: [PATCH 36/37] fix: readme Signed-off-by: tzhong518 --- .../README.md | 18 +++--------------- 1 file changed, 3 insertions(+), 15 deletions(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index e512ab5757e94..bf8e5cf9373ad 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -8,24 +8,12 @@ ### Input -<<<<<<< HEAD - -| Name | Type | Description | -| ----------------------------------- | ------------------------------------------------------- | ------------------ | -| `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | -| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | -| `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | - -======= - | Name | Type | Description | | 
----------------------------------- | ------------------------------------------------------- | ------------------ | | `~/input/vector_map` | `autoware_map_msgs::msg::LaneletMapBin` | vector map | -| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route (optional) | +| `~/input/route` | `autoware_planning_msgs::msg::LaneletRoute` | route | | `~/input/classified/traffic_lights` | `autoware_perception_msgs::msg::TrafficLightGroupArray` | classified signals | -> > > > > > > 200573ce766fc67663ffc721632f6520d392a775 - ### Output | Name | Type | Description | @@ -37,8 +25,8 @@ | Name | Type | Description | Default value | | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | -| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detected vehicle traffic light color. The unit is second. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. | +| `last_detect_color_hold_time` | `double` | The time threshold to hold for last detect color. | `2.0` | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. 
|`1.0` | ## Inner-workings / Algorithms From 3333b540a0122f02c346ff91ecd6a93fc0da51c5 Mon Sep 17 00:00:00 2001 From: "pre-commit-ci[bot]" <66853113+pre-commit-ci[bot]@users.noreply.github.com> Date: Mon, 9 Dec 2024 23:31:51 +0000 Subject: [PATCH 37/37] style(pre-commit): autofix --- perception/autoware_crosswalk_traffic_light_estimator/README.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/perception/autoware_crosswalk_traffic_light_estimator/README.md b/perception/autoware_crosswalk_traffic_light_estimator/README.md index bf8e5cf9373ad..82aa12c124029 100644 --- a/perception/autoware_crosswalk_traffic_light_estimator/README.md +++ b/perception/autoware_crosswalk_traffic_light_estimator/README.md @@ -26,7 +26,7 @@ | :---------------------------- | :------- | :------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | :------------ | | `use_last_detect_color` | `bool` | If this parameter is `true`, this module estimates pedestrian's traffic signal as RED not only when vehicle's traffic signal is detected as GREEN/AMBER but also when detection results change GREEN/AMBER to UNKNOWN. (If detection results change RED or AMBER to UNKNOWN, this module estimates pedestrian's traffic signal as UNKNOWN.) If this parameter is `false`, this module use only latest detection results for estimation. (Only when the detection result is GREEN/AMBER, this module estimates pedestrian's traffic signal as RED.) | `true` | | `last_detect_color_hold_time` | `double` | The time threshold to hold for last detect color. | `2.0` | -| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. |`1.0` | +| `last_colors_hold_time` | `double` | The time threshold to hold for history detected pedestrian traffic light color. The unit is second. | `1.0` | ## Inner-workings / Algorithms
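For reference, the renamed interface can be exercised with a minimal downstream subscriber such as the sketch below. This is only an illustration, not part of the patches above: the topic name `/traffic_light_arbiter/pub/traffic_lights` and the `autoware_perception_msgs::msg::TrafficLightGroupArray` type are taken from the README tables in this series, while the node name and the `traffic_light_groups` field access are assumptions that should be checked against the installed message definition.

```cpp
// Minimal sketch of a downstream subscriber for the renamed arbiter output.
// Assumptions: the node/executable naming is hypothetical; the field name
// `traffic_light_groups` follows the TrafficLightGroupArray definition.
#include <memory>

#include <rclcpp/rclcpp.hpp>

#include <autoware_perception_msgs/msg/traffic_light_group_array.hpp>

class TrafficLightListener : public rclcpp::Node
{
public:
  TrafficLightListener() : rclcpp::Node("traffic_light_listener")
  {
    using autoware_perception_msgs::msg::TrafficLightGroupArray;
    sub_ = create_subscription<TrafficLightGroupArray>(
      "/traffic_light_arbiter/pub/traffic_lights", rclcpp::QoS{1},
      [this](const TrafficLightGroupArray::ConstSharedPtr msg) {
        // Each entry corresponds to one traffic light group in the vector map.
        RCLCPP_INFO(
          get_logger(), "received %zu traffic light groups",
          msg->traffic_light_groups.size());
      });
  }

private:
  rclcpp::Subscription<autoware_perception_msgs::msg::TrafficLightGroupArray>::SharedPtr sub_;
};

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  rclcpp::spin(std::make_shared<TrafficLightListener>());
  rclcpp::shutdown();
  return 0;
}
```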