diff --git a/Docs/3rd_party_integrations.md b/Docs/3rd_party_integrations.md
index 87e5133e849..3a4b1552510 100644
--- a/Docs/3rd_party_integrations.md
+++ b/Docs/3rd_party_integrations.md
@@ -10,7 +10,6 @@ CARLA has been developed to integrate with several 3rd party applications in ord
- [__Chrono__](tuto_G_chrono.md)
- [__ASAM OpenDRIVE__](adv_opendrive.md)
- [__PTV Vissim__](adv_ptv.md)
-- [__RSS__](adv_rss.md)
- [__AWS and RLlib__](tuto_G_rllib_integration.md)
---
@@ -63,10 +62,6 @@ Learn how to use CARLA alongside CarSIM [here](tuto_G_carsim_integration.md).
[__ASAM OpenDRIVE__](https://www.asam.net/standards/detail/opendrive/) is an open format specification used to describe the logic of a road network. It is intended to standardise the description of road networks in digital format and to allow different applications to exchange road network data. Please refer to the full documentation [__here__](adv_opendrive.md).
-## RSS - Responsibility Sensitive Safety
-
-CARLA integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in the client library. This feature allows users to investigate behaviours of RSS without having to implement anything. CARLA will take care of providing the input, and applying the output to the AD systems on the fly. Refer to the full documentation [__here__](adv_rss.md)
-
## AWS and RLlib integration
The RLlib integration brings support between the Ray/RLlib library and CARLA, allowing the easy use of the CARLA environment for training and inference purposes. Ray is an open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library. Read more about operating CARLA on AWS and RLlib [__here__](tuto_G_rllib_integration.md).
diff --git a/Docs/adv_rss.md b/Docs/adv_rss.md
deleted file mode 100644
index e357822d4a2..00000000000
--- a/Docs/adv_rss.md
+++ /dev/null
@@ -1,130 +0,0 @@
-# RSS
-
-CARLA integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in the client library. This feature allows users to investigate behaviours of RSS without having to implement anything. CARLA will take care of providing the input, and applying the output to the AD systems on the fly.
-
-* [__Overview__](#overview)
-* [__Compilation__](#compilation)
- * [Dependencies](#dependencies)
- * [Build](#build)
-* [__Current state__](#current-state)
- * [RssSensor](#rsssensor)
- * [RssRestrictor](#rssrestrictor)
-
-!!! Important
- This feature is a work in progress. Right now, it is only available for the Linux build.
-
----
-## Overview
-
-The RSS library implements a mathematical model for safety assurance. It receives sensor information, and provides restrictions to the controllers of a vehicle. To sum up, the RSS module uses the sensor data to define __situations__. A situation describes the state of the ego vehicle with an element of the environment. For each situation, safety checks are made, and a proper response is calculated. The overall response is the result of all of them combined. For specific information on the library, read the [documentation](https://intel.github.io/ad-rss-lib/), especially the [Background section](https://intel.github.io/ad-rss-lib/ad_rss/Overview/).
-
-This is implemented in CARLA using two elements.
-
-* __RssSensor__ is in charge of the situation analysis, and response generation using the *ad-rss-lib*.
-* __RssRestrictor__ applies the response by restricting the commands of the vehicle.
-
-The following image sketches the integration of __RSS__ into the CARLA architecture.
-
-![Integrate RSS into CARLA](img/rss_carla_integration_architecture.png)
-
-__1. The server.__
-
-- Sends a camera image to the client. (Only if the client needs visualization).
-- Provides the RssSensor with world data.
-- Sends a physics model of the vehicle to the RssRestrictor. (Only if the default values are overwritten).
-
-__2. The client.__
-
-- Provides the *RssSensor* with some [parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) to be considered.
-- Sends to the *RssRestrictor* an initial [carla.VehicleControl](python_api.md#carla.VehicleControl).
-
-__3. The RssSensor.__
-
-- Uses the *ad-rss-lib* to extract situations, do safety checks, and generate a response.
-- Sends the *RssRestrictor* a response containing the proper response and acceleration restrictions to be applied.
-
-__4. The RssRestrictor__
-
-- If the client asks for it, applies the response to the [carla.VehicleControl](python_api.md#carla.VehicleControl), and returns the resulting one.
-
-[![RSS sensor in CARLA](img/rss_carla_integration.png)](https://www.youtube.com/watch?v=UxKPXPT2T8Q)
-
-Visualization of the RssSensor results.
-
----
-## Compilation
-
-The RSS integration has to be built aside from the rest of CARLA. The __ad-rss-lib__ comes with an LGPL-2.1 open-source license that creates a conflict, so it has to be linked statically into *libCarla*.
-
-As a reminder, the feature is only available for the Linux build so far.
-
-### Dependencies
-
-There are additional prerequisites required for building RSS and its dependencies. Take a look at the [official documentation](https://intel.github.io/ad-rss-lib/BUILDING) to know more about this.
-
-Dependencies provided by Ubuntu (>= 16.04).
-```sh
-sudo apt-get install libgtest-dev libpython-dev libpugixml-dev libtbb-dev
-```
-
-The dependencies are built using [colcon](https://colcon.readthedocs.io/en/released/user/installation.html), so it has to be installed.
-```sh
-pip3 install --user -U colcon-common-extensions
-```
-
-There are some additional dependencies for the Python bindings.
-```sh
-sudo apt-get install castxml
-pip3 install --user pygccxml pyplusplus
-```
-
-### Build
-
-Once this is done, the full set of dependencies and RSS components can be built.
-
-* Compile LibCarla to work with RSS.
-
-```sh
-make LibCarla.client.rss
-```
-
-* Compile the PythonAPI to include the RSS feature.
-
-```sh
-make PythonAPI.rss
-```
-
-* As an alternative, a package can be built directly.
-```sh
-make package.rss
-```
-
----
-## Current state
-
-### RssSensor
-
-[__carla.RssSensor__](python_api.md#carla.RssSensor) supports [ad-rss-lib v4.2.0 feature set](https://intel.github.io/ad-rss-lib/RELEASE_NOTES_AND_DISCLAIMERS) completely, including intersections, [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) support and [unstructured constellations (e.g. with pedestrians)](https://intel.github.io/ad-rss-lib/ad_rss/UnstructuredConstellations/).
-
-So far, the server provides the sensor with ground truth data of the surroundings that includes the state of other traffic participants and traffic lights.
-
-### RssRestrictor
-
-When the client calls for it, the [__carla.RssRestrictor__](python_api.md#carla.RssRestrictor) will modify the vehicle controls to best reach the accelerations or decelerations required by a given response.
-
-Due to the structure of [carla.VehicleControl](python_api.md#carla.VehicleControl) objects, the restrictions applied have certain limitations. These controls include `throttle`, `brake` and `steering` values. However, due to car physics and the simple control options, these restrictions might not be met. The restriction intervenes in the lateral direction simply by counter-steering towards the parallel lane direction. The brake will be activated if a deceleration is requested by RSS. This depends on the vehicle mass and the brake torques provided by the [carla.Vehicle](python_api.md#carla.Vehicle).
-
-!!! Note
- In an automated vehicle controller it might be possible to adapt the planned trajectory to the restrictions. A fast control loop (>1 kHz) can be used to ensure these are followed.
-
----
-
-That sets the basics regarding the RSS sensor in CARLA. Find out more about the specific attributes and parameters in the [sensor reference](ref_sensors.md#rss-sensor).
-
-Open CARLA and mess around for a while. If there are any doubts, feel free to post these in the forum.
-
-
diff --git a/Docs/bp_library.md b/Docs/bp_library.md
index 416b5aa7fb1..7536df1a54b 100755
--- a/Docs/bp_library.md
+++ b/Docs/bp_library.md
@@ -255,9 +255,6 @@ Check out the [introduction to blueprints](core_actors.md).
- `role_name` (_String_) _- Modifiable_
- `sensor_tick` (_Float_) _- Modifiable_
- `vertical_fov` (_Float_) _- Modifiable_
-- **sensor.other.rss**
- - **Attributes:**
- - `role_name` (_String_) _- Modifiable_
### static
- **static.prop.advertisement**
diff --git a/Docs/core_concepts.md b/Docs/core_concepts.md
index 895647273af..bc4cc5ddff0 100644
--- a/Docs/core_concepts.md
+++ b/Docs/core_concepts.md
@@ -57,7 +57,6 @@ A sensor is an actor attached to a parent vehicle. It follows the vehicle around
* Lane invasion detector.
* Obstacle detector.
* Radar.
-* RSS.
---
## Advanced steps
@@ -68,7 +67,6 @@ CARLA offers a wide range of features that go beyond the scope of this introduct
* [__PTV-Vissim co-simulation__](adv_ptv.md). Run a synchronous simulation between CARLA and PTV-Vissim traffic simulator.
* [__Recorder__](adv_recorder.md). Saves snapshots of the simulation state to reenact a simulation with exact precision.
* [__Rendering options__](adv_rendering_options.md). Graphics quality settings, off-screen rendering and a no-rendering mode.
-* [__RSS__](adv_rss.md). Integration of the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) to modify a vehicle's trajectory using safety checks.
* [__Simulation time and synchrony__](adv_synchrony_timestep.md). Everything regarding the simulation time and server-client communication.
* [__SUMO co-simulation__](adv_sumo.md). Run a synchronous simulation between CARLA and SUMO traffic simulator.
* [__Traffic manager__](adv_traffic_manager.md). This module is in charge of every vehicle set to autopilot mode. It simulates traffic in the city for the simulation to look like a real urban environment.
diff --git a/Docs/core_sensors.md b/Docs/core_sensors.md
index 0a3ab237cf5..4ed6dc0c380 100644
--- a/Docs/core_sensors.md
+++ b/Docs/core_sensors.md
@@ -152,7 +152,6 @@ Different functionalities such as navigation, measurement of physical properties
| [IMU](ref_sensors.md#imu-sensor) | [carla.IMUMeasurement](<../python_api#carlaimumeasurement>) | Comprises an accelerometer, a gyroscope, and a compass. |
| [LIDAR](ref_sensors.md#lidar-sensor) | [carla.LidarMeasurement](<../python_api#carlalidarmeasurement>) | A rotating LIDAR. Generates a 4D point cloud with coordinates and intensity per point to model the surroundings. |
| [Radar](ref_sensors.md#radar-sensor) | [carla.RadarMeasurement](<../python_api#carlaradarmeasurement>) | 2D point map modelling elements in sight and their movement regarding the sensor. |
-| [RSS](ref_sensors.md#rss-sensor) | [carla.RssResponse](<../python_api#carlarssresponse>) | Modifies the controller applied to a vehicle according to safety checks. This sensor works in a different manner than the rest, and there is specific [RSS documentation](<../adv_rss>) for it. |
| [Semantic LIDAR](ref_sensors.md#semantic-lidar-sensor) | [carla.SemanticLidarMeasurement](<../python_api#carlasemanticlidarmeasurement>) | A rotating LIDAR. Generates a 3D point cloud with extra information regarding instance and semantic segmentation. |
diff --git a/Docs/index.md b/Docs/index.md
index efa4a5b4f9e..9a092b824ff 100644
--- a/Docs/index.md
+++ b/Docs/index.md
@@ -52,7 +52,6 @@ CARLA forum
[__Chrono__](tuto_G_chrono.md) — Details of the Chrono physics simulation integration with CARLA.
[__OpenDrive__](adv_opendrive.md) — Details of the OpenDrive support in CARLA.
[__PTV-Vissim__](adv_ptv.md) — Details of the co-simulation feature with PTV-Vissim.
-[__RSS__](adv_rss.md) — Details of the Responsibility Sensitive Safety library integration with CARLA.
[__AWS__](tuto_G_rllib_integration.md) — Details of using RLlib to run CARLA as a distributed application on Amazon Web Services.
[__ANSYS__](ecosys_ansys.md) — Brief overview of how the Ansys Real Time Radar Model was integrated into CARLA.
[__carlaviz — web visualizer__](plugins_carlaviz.md) — Plugin that listens to the simulation and shows the scene and some simulation data in a web browser.
diff --git a/Docs/python_api.md b/Docs/python_api.md
index 02fc7fb28fe..901b3fd1204 100644
--- a/Docs/python_api.md
+++ b/Docs/python_api.md
@@ -2039,212 +2039,6 @@ Parses the axis' orientations to string.
---
-## carla.RssActorConstellationData
-Data structure that is provided within the callback registered by RssSensor.register_actor_constellation_callback().
-
-### Instance Variables
-- **ego_match_object** (_ad.map.match.Object_)
-The ego map matched information.
-- **ego_route** (_ad.map.route.FullRoute_)
-The ego route.
-- **ego_dynamics_on_route** (_[carla.RssEgoDynamicsOnRoute](#carla.RssEgoDynamicsOnRoute)_)
-Current ego vehicle dynamics regarding the route.
-- **other_match_object** (_ad.map.match.Object_)
-The other object's map matched information. This is only valid if 'other_actor' is not 'None'.
-- **other_actor** (_[carla.Actor](#carla.Actor)_)
-The other actor. This is 'None' in case of a query of default parameters or artificial objects of kind ad.rss.world.ObjectType.ArtificialObject with no dedicated '[carla.Actor](#carla.Actor)' (as e.g. for the [road boundaries](ref_sensors.md#rss-sensor) at the moment).
-
-### Methods
-
-##### Dunder methods
-- **\__str__**(**self**)
-
----
-
-## carla.RssActorConstellationResult
-Data structure that should be returned by the callback registered by RssSensor.register_actor_constellation_callback().
-
-### Instance Variables
-- **rss_calculation_mode** (_ad.rss.map.RssMode_)
-The calculation mode to be applied with the actor.
-- **restrict_speed_limit_mode** (_ad.rss.map.RestrictSpeedLimitMode_)
-The mode for restricting speed limit.
-- **ego_vehicle_dynamics** (_ad.rss.world.RssDynamics_)
-The RSS dynamics to be applied for the ego vehicle.
-- **actor_object_type** (_ad.rss.world.ObjectType_)
-The RSS object type to be used for the actor.
-- **actor_dynamics** (_ad.rss.world.RssDynamics_)
-The RSS dynamics to be applied for the actor.
-
-### Methods
-
-##### Dunder methods
-- **\__str__**(**self**)
-
----
-
-## carla.RssEgoDynamicsOnRoute
-Part of the data contained inside a [carla.RssResponse](#carla.RssResponse) describing the state of the vehicle. The parameters include its current dynamics, and how it is heading regarding the target route.
-
-### Instance Variables
-- **ego_speed** (_ad.physics.Speed_)
-The ego vehicle's speed.
-- **min_stopping_distance** (_ad.physics.Distance_)
-The current minimum stopping distance.
-- **ego_center** (_ad.map.point.ENUPoint_)
-The considered enu position of the ego vehicle.
-- **ego_heading** (_ad.map.point.ENUHeading_)
-The considered heading of the ego vehicle.
-- **ego_center_within_route** (_bool_)
-States if the ego vehicle's center is within the route.
-- **crossing_border** (_bool_)
-States if the vehicle is already crossing one of the lane borders.
-- **route_heading** (_ad.map.point.ENUHeading_)
-The considered heading of the route.
-- **route_nominal_center** (_ad.map.point.ENUPoint_)
-The considered nominal center of the current route.
-- **heading_diff** (_ad.map.point.ENUHeading_)
-The considered heading diff towards the route.
-- **route_speed_lat** (_ad.physics.Speed_)
-The ego vehicle's speed component _lat_ regarding the route.
-- **route_speed_lon** (_ad.physics.Speed_)
-The ego vehicle's speed component _lon_ regarding the route.
-- **route_accel_lat** (_ad.physics.Acceleration_)
-The ego vehicle's acceleration component _lat_ regarding the route.
-- **route_accel_lon** (_ad.physics.Acceleration_)
-The ego vehicle's acceleration component _lon_ regarding the route.
-- **avg_route_accel_lat** (_ad.physics.Acceleration_)
-The ego vehicle's acceleration component _lat_ regarding the route, smoothed by an average filter.
-- **avg_route_accel_lon** (_ad.physics.Acceleration_)
-The ego vehicle's acceleration component _lon_ regarding the route, smoothed by an average filter.
-
-### Methods
-
-##### Dunder methods
-- **\__str__**(**self**)
-
----
-
-## carla.RssLogLevel
-Enum declaration used in [carla.RssSensor](#carla.RssSensor) to set the log level.
-
-### Instance Variables
-- **trace**
-- **debug**
-- **info**
-- **warn**
-- **err**
-- **critical**
-- **off**
-
----
-
-## carla.RssResponse
-Inherited from _[carla.SensorData](#carla.SensorData)_
-Class that contains the output of a [carla.RssSensor](#carla.RssSensor). This is the result of the RSS calculations performed for the parent vehicle of the sensor.
-
-A [carla.RssRestrictor](#carla.RssRestrictor) will use the data to modify the [carla.VehicleControl](#carla.VehicleControl) of the vehicle.
-
-### Instance Variables
-- **response_valid** (_bool_)
-States if the response is valid. It is __False__ if calculations failed or an exception occurred.
-- **proper_response** (_ad.rss.state.ProperResponse_)
-The proper response that the RSS calculated for the vehicle.
-- **rss_state_snapshot** (_ad.rss.state.RssStateSnapshot_)
-Detailed RSS states at the current moment in time.
-- **ego_dynamics_on_route** (_[carla.RssEgoDynamicsOnRoute](#carla.RssEgoDynamicsOnRoute)_)
-Current ego vehicle dynamics regarding the route.
-- **world_model** (_ad.rss.world.WorldModel_)
-World model used for calculations.
-- **situation_snapshot** (_ad.rss.situation.SituationSnapshot_)
-Detailed RSS situations extracted from the world model.
-
-### Methods
-
-##### Dunder methods
-- **\__str__**(**self**)
-
----
-
-## carla.RssRestrictor
-These objects apply restrictions to a [carla.VehicleControl](#carla.VehicleControl). It is part of the CARLA implementation of the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib). This class works hand in hand with a [rss sensor](ref_sensors.md#rss-sensor), which provides the data of the restrictions to be applied.
-
-### Methods
-- **restrict_vehicle_control**(**self**, **vehicle_control**, **proper_response**, **ego_dynamics_on_route**, **vehicle_physics**)
-Applies the safety restrictions given by a [carla.RssSensor](#carla.RssSensor) to a [carla.VehicleControl](#carla.VehicleControl).
- - **Parameters:**
- - `vehicle_control` (_[carla.VehicleControl](#carla.VehicleControl)_) - The input vehicle control to be restricted.
- - `proper_response` (_ad.rss.state.ProperResponse_) - Part of the response generated by the sensor. Contains restrictions to be applied to the acceleration of the vehicle.
- - `ego_dynamics_on_route` (_[carla.RssEgoDynamicsOnRoute](#carla.RssEgoDynamicsOnRoute)_) - Part of the response generated by the sensor. Contains dynamics and heading of the vehicle regarding its route.
- - `vehicle_physics` (_[carla.VehiclePhysicsControl](#carla.VehiclePhysicsControl)_) - The current physics of the vehicle. Used to apply the restrictions properly.
- - **Return:** _[carla.VehicleControl](#carla.VehicleControl)_
-
-##### Setters
-- **set_log_level**(**self**, **log_level**)
-Sets the log level.
- - **Parameters:**
- - `log_level` (_[carla.RssLogLevel](#carla.RssLogLevel)_) - New log level.
-
----
-
-## carla.RssRoadBoundariesMode
-Enum declaration used in [carla.RssSensor](#carla.RssSensor) to enable or disable the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. In summary, this feature considers the road boundaries as virtual objects. The minimum safety distance check is applied to these virtual walls, in order to make sure the vehicle does not drive off the road.
-
-### Instance Variables
-- **On**
-Enables the _stay on road_ feature.
-- **Off**
-Disables the _stay on road_ feature.
-
----
-
-## carla.RssSensor
-Inherited from _[carla.Sensor](#carla.Sensor)_
-This sensor works a bit differently than the rest. Take a look at the [specific documentation](adv_rss.md), and the [rss sensor reference](ref_sensors.md#rss-sensor) to gain a full understanding of it.
-
-The RSS sensor uses world information, and a [RSS library](https://github.com/intel/ad-rss-lib) to make safety checks on a vehicle. The output retrieved by the sensor is a [carla.RssResponse](#carla.RssResponse). This will be used by a [carla.RssRestrictor](#carla.RssRestrictor) to modify a [carla.VehicleControl](#carla.VehicleControl) before applying it to a vehicle.
-
-### Instance Variables
-- **ego_vehicle_dynamics** (_ad.rss.world.RssDynamics_)
-States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the ego vehicle if no actor constellation callback is registered.
-- **other_vehicle_dynamics** (_ad.rss.world.RssDynamics_)
-States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the rest of vehicles if no actor constellation callback is registered.
-- **pedestrian_dynamics** (_ad.rss.world.RssDynamics_)
-States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for pedestrians if no actor constellation callback is registered.
-- **road_boundaries_mode** (_[carla.RssRoadBoundariesMode](#carla.RssRoadBoundariesMode)_)
-Switches the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. By default it is __Off__.
-- **routing_targets** (_vector<[carla.Transform](#carla.Transform)>_)
-The current list of targets considered to route the vehicle. If no routing targets are defined, a route is generated at random.
-
-### Methods
-- **append_routing_target**(**self**, **routing_target**)
-Appends a new target position to the current route of the vehicle.
- - **Parameters:**
- - `routing_target` (_[carla.Transform](#carla.Transform)_) - New target point for the route. Choose these after the intersections to force the route to take the desired turn.
-- **drop_route**(**self**)
-Discards the current route. If there are targets remaining in **routing_targets**, creates a new route using those. Otherwise, a new route is created at random.
-- **register_actor_constellation_callback**(**self**, **callback**)
-Registers a callback to customize a [carla.RssActorConstellationResult](#carla.RssActorConstellationResult). Through this callback, the RSS parameters are set per actor constellation, and the settings (ego_vehicle_dynamics, other_vehicle_dynamics and pedestrian_dynamics) have no effect.
- **Parameters:**
- `callback` - The function to be called whenever an RSS situation is about to be calculated.
-- **reset_routing_targets**(**self**)
-Erases the targets that have been appended to the route.
-
-##### Setters
-- **set_log_level**(**self**, **log_level**)
-Sets the log level.
- - **Parameters:**
- - `log_level` (_[carla.RssLogLevel](#carla.RssLogLevel)_) - New log level.
-- **set_map_log_level**(**self**, **log_level**)
-Sets the map log level.
- - **Parameters:**
- - `log_level` (_[carla.RssLogLevel](#carla.RssLogLevel)_) - New map log level.
-
-##### Dunder methods
-- **\__str__**(**self**)
-
----
-
## carla.SemanticLidarDetection
Data contained inside a [carla.SemanticLidarMeasurement](#carla.SemanticLidarMeasurement). Each of these represents one of the points in the cloud with its location, the cosine of the incident angle, index of the object hit, and its semantic tag.
@@ -2312,7 +2106,6 @@ Sensors compound a specific family of actors quite diverse and unique. They are
- [SemanticLidar raycast](ref_sensors.md#semanticlidar-raycast-sensor)
- [Radar](ref_sensors.md#radar-sensor)
- [RGB camera](ref_sensors.md#rgb-camera)
- - [RSS sensor](ref_sensors.md#rss-sensor)
- [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera)
Only receive data when triggered.
- [Collision detector](ref_sensors.md#collision-detector)
@@ -2375,7 +2168,6 @@ Base class for all the objects containing data generated by a [carla.Sensor](#ca
- LIDAR sensor: [carla.LidarMeasurement](#carla.LidarMeasurement).
- Obstacle detector: [carla.ObstacleDetectionEvent](#carla.ObstacleDetectionEvent).
- Radar sensor: [carla.RadarMeasurement](#carla.RadarMeasurement).
- - RSS sensor: [carla.RssResponse](#carla.RssResponse).
- Semantic LIDAR sensor: [carla.SemanticLidarMeasurement](#carla.SemanticLidarMeasurement).
- Cooperative awareness messages V2X sensor: [carla.CAMEvent](#carla.CAMEvent).
- Custom V2X messages V2X sensor: [carla.CustomV2XEvent](#carla.CustomV2XEvent).
diff --git a/Docs/ref_sensors.md b/Docs/ref_sensors.md
index 2c233381f1d..2a523b80fdd 100644
--- a/Docs/ref_sensors.md
+++ b/Docs/ref_sensors.md
@@ -9,7 +9,6 @@
- [__Obstacle detector__](#obstacle-detector)
- [__Radar sensor__](#radar-sensor)
- [__RGB camera__](#rgb-camera)
-- [__RSS sensor__](#rss-sensor)
- [__Semantic LIDAR sensor__](#semantic-lidar-sensor)
- [__Semantic segmentation camera__](#semantic-segmentation-camera)
- [__Instance segmentation camera__](#instance-segmentation-camera)
@@ -494,139 +493,6 @@ Since these effects are provided by UE, please make sure to check their document
| `fov` | float | Horizontal field of view in degrees. |
| `raw_data` | bytes | Array of BGRA 32-bit pixels. |
-
-
----
-## RSS sensor
-
-* __Blueprint:__ sensor.other.rss
-* __Output:__ [carla.RssResponse](python_api.md#carla.RssResponse) per step (unless `sensor_tick` says otherwise).
-
-!!! Important
- It is highly recommended to read the specific [rss documentation](adv_rss.md) before reading this.
-
-This sensor integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in CARLA. It is disabled by default in CARLA, and it has to be explicitly built in order to be used.
-
-The RSS sensor calculates the RSS state of a vehicle and retrieves the current RSS Response as sensor data. The [carla.RssRestrictor](python_api.md#carla.RssRestrictor) will use this data to adapt a [carla.VehicleControl](python_api.md#carla.VehicleControl) before applying it to a vehicle.
-
-These controls can be generated by an *Automated Driving* stack or by user input. For instance, below is a fragment of code from `PythonAPI/examples/rss/manual_control_rss.py`, where the user input is modified using RSS when necessary.
-
-__1.__ Checks if the __RssSensor__ generates a valid response containing restrictions.
-__2.__ Gathers the current dynamics of the vehicle and the vehicle physics.
-__3.__ Applies restrictions to the vehicle control using the response from the RssSensor, and the current dynamics and physics of the vehicle.
-
-```py
-rss_proper_response = self._world.rss_sensor.proper_response if self._world.rss_sensor and self._world.rss_sensor.response_valid else None
-if rss_proper_response:
-...
- vehicle_control = self._restrictor.restrict_vehicle_control(
- vehicle_control, rss_proper_response, self._world.rss_sensor.ego_dynamics_on_route, self._vehicle_physics)
-```
-
-
-#### The carla.RssSensor class
-
-The blueprint for this sensor has no modifiable attributes. However, the [carla.RssSensor](python_api.md#carla.RssSensor) object that it instantiates has attributes and methods that are detailed in the Python API reference. Here is a summary of them.
-
-| [carla.RssSensor variables](<../python_api#carlarsssensor>) | Type | Description |
-| ---------------------------------------- | ---------------------------------------- | ---------------------------------------- |
-| `ego_vehicle_dynamics` | [ad.rss.world.RssDynamics]() | RSS parameters to be applied for the ego vehicle |
-| `other_vehicle_dynamics` | [ad.rss.world.RssDynamics]() | RSS parameters to be applied for the other vehicles |
-| `pedestrian_dynamics` | [ad.rss.world.RssDynamics]() | RSS parameters to be applied for pedestrians |
-| `road_boundaries_mode` | [carla.RssRoadBoundariesMode](<../python_api#carlarssroadboundariesmode>) | Enables/Disables the [stay on road]() feature. Default is **Off**. |
-
-
-
-
-```py
-# Fragment of rss_sensor.py
-# The carla.RssSensor is updated when listening for a new carla.RssResponse
-def _on_rss_response(weak_self, response):
-...
- self.timestamp = response.timestamp
- self.response_valid = response.response_valid
- self.proper_response = response.proper_response
- self.ego_dynamics_on_route = response.ego_dynamics_on_route
- self.rss_state_snapshot = response.rss_state_snapshot
- self.situation_snapshot = response.situation_snapshot
- self.world_model = response.world_model
-```
-
-!!! Warning
- This sensor works fully on the client side. There is no blueprint in the server. Changes to the attributes will take effect __after__ *listen()* has been called.
-
-The methods available in this class are related to the routing of the vehicle. RSS calculations are always based on a route of the ego vehicle through the road network.
-
-The sensor allows the user to control the considered route by providing some key points, which could be the [carla.Transform](python_api.md#carla.Transform) of a [carla.Waypoint](python_api.md#carla.Waypoint). These points are best selected after intersections to force the route to take the desired turn.
-
-| [carla.RssSensor methods](<../python_api#carlarsssensor>) | Description |
-| ----------------------------------------- | ----------------------------------------- |
-| `routing_targets` | Get the current list of routing targets used for route. |
-| `append_routing_target` | Append an additional position to the current routing targets. |
-| `reset_routing_targets` | Deletes the appended routing targets. |
-| `drop_route` | Discards the current route and creates a new one. |
-| `register_actor_constellation_callback` | Register a callback to customize the calculations. |
-| `set_log_level` | Sets the log level. |
-| `set_map_log_level` | Sets the log level used for map related logs. |
-
-
-
-
-
----
-
-
-```py
-# Update the current route
-self.sensor.reset_routing_targets()
-if routing_targets:
- for target in routing_targets:
- self.sensor.append_routing_target(target)
-```
-
-!!! Note
- If no routing targets are defined, a random route is created.
-
-#### Output attributes
-
-| [carla.RssResponse attributes](<../python_api#carlarssresponse>) | Type | Description |
-| ------------------------------------- | ------------------------------------- | ------------------------------------- |
-| `response_valid` | bool | Validity of the response data. |
-| `proper_response` | [ad.rss.state.ProperResponse]() | Proper response that the RSS calculated for the vehicle including acceleration restrictions. |
-| `rss_state_snapshot` | [ad.rss.state.RssStateSnapshot]() | RSS states at the current point in time. This is the detailed individual output of the RSS calculations. |
-| `situation_snapshot` | [ad.rss.situation.SituationSnapshot]() | RSS situation at the current point in time. This is the processed input data for the RSS calculations. |
-| `world_model` | [ad.rss.world.WorldModel]() | RSS world model at the current point in time. This is the input data for the RSS calculations. |
-| `ego_dynamics_on_route` | [carla.RssEgoDynamicsOnRoute](<../python_api#carlarssegodynamicsonroute>) | Current ego vehicle dynamics regarding the route. |
-
-
-In case an actor_constellation_callback is registered, a call is triggered for:
-
-1. default calculation (`actor_constellation_data.other_actor=None`)
-2. per-actor calculation
-
-```py
-# Fragment of rss_sensor.py
-# The function is registered as actor_constellation_callback
-def _on_actor_constellation_request(self, actor_constellation_data):
- actor_constellation_result = carla.RssActorConstellationResult()
- actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.NotRelevant
- actor_constellation_result.restrict_speed_limit_mode = ad.rss.map.RssSceneCreation.RestrictSpeedLimitMode.IncreasedSpeedLimit10
- actor_constellation_result.ego_vehicle_dynamics = self.current_vehicle_parameters
- actor_constellation_result.actor_object_type = ad.rss.world.ObjectType.Invalid
- actor_constellation_result.actor_dynamics = self.current_vehicle_parameters
-
- actor_id = -1
- actor_type_id = "none"
- if actor_constellation_data.other_actor != None:
- # customize actor_constellation_result for specific actor
- ...
- else:
- # default
- ...
- return actor_constellation_result
-```
-
-
---
## Semantic LIDAR sensor
diff --git a/Docs/tuto_G_bounding_boxes.md b/Docs/tuto_G_bounding_boxes.md
index 378de40d862..0e95019bf0d 100644
--- a/Docs/tuto_G_bounding_boxes.md
+++ b/Docs/tuto_G_bounding_boxes.md
@@ -128,7 +128,7 @@ bounding_box_set = world.get_level_bbs(carla.CityObjectLabel.TrafficLight)
nearby_bboxes = []
for bbox in bounding_box_set:
if bbox.location.distance(actor.get_transform().location) < 50:
- nearby_bboxes
+ nearby_bboxes.append(bbox)
```
This list can be further filtered using actor location to identify objects that are nearby and therefore likely to be within the field of view of a camera attached to an actor.
diff --git a/PythonAPI/docs/sensor.yml b/PythonAPI/docs/sensor.yml
index 5885217cd0f..8a2f87baaf8 100644
--- a/PythonAPI/docs/sensor.yml
+++ b/PythonAPI/docs/sensor.yml
@@ -17,7 +17,6 @@
- [SemanticLidar raycast](ref_sensors.md#semanticlidar-raycast-sensor)
- [Radar](ref_sensors.md#radar-sensor)
- [RGB camera](ref_sensors.md#rgb-camera)
- - [RSS sensor](ref_sensors.md#rss-sensor)
- [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera)
Only receive data when triggered.
- [Collision detector](ref_sensors.md#collision-detector)
@@ -111,154 +110,4 @@
- def_name: __str__
# --------------------------------------
- - class_name: RssSensor
- parent: carla.Sensor
- # - DESCRIPTION ------------------------
- doc: >
- This sensor works a bit differently than the rest. Take a look at the [specific documentation](adv_rss.md), and the [rss sensor reference](ref_sensors.md#rss-sensor) to gain a full understanding of it.
-
-
- The RSS sensor uses world information, and a [RSS library](https://github.com/intel/ad-rss-lib) to make safety checks on a vehicle. The output retrieved by the sensor is a carla.RssResponse. This will be used by a carla.RssRestrictor to modify a carla.VehicleControl before applying it to a vehicle.
- # - PROPERTIES -------------------------
- instance_variables:
- - var_name: ego_vehicle_dynamics
- type: ad.rss.world.RssDynamics
- doc: >
- States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the ego vehicle if no actor constellation callback is registered.
- - var_name: other_vehicle_dynamics
- type: ad.rss.world.RssDynamics
- doc: >
- States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the rest of vehicles if no actor constellation callback is registered.
- - var_name: pedestrian_dynamics
- type: ad.rss.world.RssDynamics
- doc: >
- States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for pedestrians if no actor constellation callback is registered.
- - var_name: road_boundaries_mode
- type: carla.RssRoadBoundariesMode
- doc: >
- Switches the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. By default it is __Off__.
- - var_name: routing_targets
- type: vector
- doc: >
- The current list of targets considered to route the vehicle. If no routing targets are defined, a route is generated at random.
- # - METHODS ----------------------------
- methods:
- - def_name: append_routing_target
- params:
- - param_name: routing_target
- type: carla.Transform
- doc: >
- New target point for the route. Choose these after the intersections to force the route to take the desired turn.
- doc: >
- Appends a new target position to the current route of the vehicle.
- - def_name: reset_routing_targets
- doc: >
- Erases the targets that have been appended to the route.
- - def_name: drop_route
- doc: >
- Discards the current route. If there are targets remaining in **routing_targets**, creates a new route using those. Otherwise, a new route is created at random.
- - def_name: register_actor_constellation_callback
- params:
- - param_name: callback
- doc: >
- The function to be called whenever an RSS situation is about to be calculated.
- doc: >
- Registers a callback to customize a carla.RssActorConstellationResult. Through this callback, the RSS parameters are set per actor constellation, and the settings (ego_vehicle_dynamics, other_vehicle_dynamics and pedestrian_dynamics) have no effect.
- - def_name: set_log_level
- params:
- - param_name: log_level
- type: carla.RssLogLevel
- doc: >
- New log level.
- doc: >
- Sets the log level.
- - def_name: set_map_log_level
- params:
- - param_name: log_level
- type: carla.RssLogLevel
- doc: >
- New map log level.
- doc: >
- Sets the map log level.
- # --------------------------------------
- - def_name: __str__
- # --------------------------------------
-
- - class_name: RssRestrictor
- parent:
- # - DESCRIPTION ------------------------
- doc: >
- These objects apply restrictions to a carla.VehicleControl. It is part of the CARLA implementation of the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib). This class works hand in hand with a [rss sensor](ref_sensors.md#rss-sensor), which provides the data of the restrictions to be applied.
- # - PROPERTIES -------------------------
- instance_variables:
-
- # - METHODS ----------------------------
- methods:
- - def_name: restrict_vehicle_control
- params:
- - param_name: vehicle_control
- type: carla.VehicleControl
- doc: >
- The input vehicle control to be restricted.
- - param_name: proper_response
- type: ad.rss.state.ProperResponse
- doc: >
- Part of the response generated by the sensor. Contains restrictions to be applied to the acceleration of the vehicle.
- - param_name: ego_dynamics_on_route
- type: carla.RssEgoDynamicsOnRoute
- doc: >
- Part of the response generated by the sensor. Contains dynamics and heading of the vehicle regarding its route.
- - param_name: vehicle_physics
- type: carla.VehiclePhysicsControl
- doc: >
- The current physics of the vehicle. Used to apply the restrictions properly.
- return:
- carla.VehicleControl
- doc: >
- Applies the safety restrictions given by a carla.RssSensor to a carla.VehicleControl.
- - def_name: set_log_level
- params:
- - param_name: log_level
- type: carla.RssLogLevel
- doc: >
- New log level.
- doc: >
- Sets the log level.
- # --------------------------------------
-
- - class_name: RssRoadBoundariesMode
- # - DESCRIPTION ------------------------
- doc: >
- Enum declaration used in carla.RssSensor to enable or disable the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. In summary, this feature considers the road boundaries as virtual objects. The minimum safety distance check is applied to these virtual walls, in order to make sure the vehicle does not drive off the road.
- # - PROPERTIES -------------------------
- instance_variables:
- - var_name: 'On'
- doc: >
- Enables the _stay on road_ feature.
- # --------------------------------------
- - var_name: 'Off'
- doc: >
- Disables the _stay on road_ feature.
- # --------------------------------------
-
- - class_name: RssLogLevel
- # - DESCRIPTION ------------------------
- doc: >
- Enum declaration used in carla.RssSensor to set the log level.
- # - PROPERTIES -------------------------
- instance_variables:
- - var_name: 'trace'
- # --------------------------------------
- - var_name: 'debug'
- # --------------------------------------
- - var_name: 'info'
- # --------------------------------------
- - var_name: 'warn'
- # --------------------------------------
- - var_name: 'err'
- # --------------------------------------
- - var_name: 'critical'
- # --------------------------------------
- - var_name: 'off'
- # --------------------------------------
...
diff --git a/PythonAPI/docs/sensor_data.yml b/PythonAPI/docs/sensor_data.yml
index cdd394a5698..68763bf4933 100644
--- a/PythonAPI/docs/sensor_data.yml
+++ b/PythonAPI/docs/sensor_data.yml
@@ -14,7 +14,6 @@
- LIDAR sensor: carla.LidarMeasurement.
- Obstacle detector: carla.ObstacleDetectionEvent.
- Radar sensor: carla.RadarMeasurement.
- - RSS sensor: carla.RssResponse.
- Semantic LIDAR sensor: carla.SemanticLidarMeasurement.
- Cooperative awareness messages V2X sensor: carla.CAMEvent.
- Custom V2X messages V2X sensor: carla.CustomV2XEvent.
@@ -575,207 +574,6 @@
- def_name: __str__
# --------------------------------------
- - class_name: RssResponse
- parent: carla.SensorData
- # - DESCRIPTION ------------------------
- doc: >
- Class that contains the output of a carla.RssSensor. This is the result of the RSS calculations performed for the parent vehicle of the sensor.
-
-
- A carla.RssRestrictor will use the data to modify the carla.VehicleControl of the vehicle.
- # - PROPERTIES -------------------------
- instance_variables:
- - var_name: response_valid
- type: bool
- doc: >
- States if the response is valid. It is __False__ if calculations failed or an exception occurred.
- # --------------------------------------
- - var_name: proper_response
- type: ad.rss.state.ProperResponse
- doc: >
- The proper response that the RSS calculated for the vehicle.
- # --------------------------------------
- - var_name: rss_state_snapshot
- type: ad.rss.state.RssStateSnapshot
- doc: >
- Detailed RSS states at the current moment in time.
- # --------------------------------------
- - var_name: ego_dynamics_on_route
- type: carla.RssEgoDynamicsOnRoute
- doc: >
- Current ego vehicle dynamics regarding the route.
- # --------------------------------------
- - var_name: world_model
- type: ad.rss.world.WorldModel
- doc: >
- World model used for calculations.
- # --------------------------------------
- - var_name: situation_snapshot
- type: ad.rss.situation.SituationSnapshot
- doc: >
- Detailed RSS situations extracted from the world model.
- # - METHODS ----------------------------
- methods:
- - def_name: __str__
- # --------------------------------------
-
- - class_name: RssEgoDynamicsOnRoute
- # - DESCRIPTION ------------------------
- doc: >
- Part of the data contained inside a carla.RssResponse describing the state of the vehicle. The parameters include its current dynamics, and how it is heading regarding the target route.
- # - PROPERTIES -------------------------
- instance_variables:
- ## ** Missing timestamp and time_since_epoch_check_start_ms
- - var_name: ego_speed
- type: ad.physics.Speed
- doc: >
- The ego vehicle's speed.
- # --------------------------------------
- - var_name: min_stopping_distance
- type: ad.physics.Distance
- doc: >
- The current minimum stopping distance.
- # --------------------------------------
- - var_name: ego_center
- type: ad.map.point.ENUPoint
- doc: >
- The considered enu position of the ego vehicle.
- # --------------------------------------
- - var_name: ego_heading
- type: ad.map.point.ENUHeading
- doc: >
- The considered heading of the ego vehicle.
- # --------------------------------------
- - var_name: ego_center_within_route
- type: bool
- doc: >
- States if the ego vehicle's center is within the route.
- # --------------------------------------
- - var_name: crossing_border
- type: bool
- doc: >
- States if the vehicle is already crossing one of the lane borders.
- # --------------------------------------
- - var_name: route_heading
- type: ad.map.point.ENUHeading
- doc: >
- The considered heading of the route.
- # --------------------------------------
- - var_name: route_nominal_center
- type: ad.map.point.ENUPoint
- doc: >
- The considered nominal center of the current route.
- # --------------------------------------
- - var_name: heading_diff
- type: ad.map.point.ENUHeading
- doc: >
- The considered heading diff towards the route.
- # --------------------------------------
- - var_name: route_speed_lat
- type: ad.physics.Speed
- doc: >
- The ego vehicle's speed component _lat_ regarding the route.
- # --------------------------------------
- - var_name: route_speed_lon
- type: ad.physics.Speed
- doc: >
- The ego vehicle's speed component _lon_ regarding the route.
- # --------------------------------------
- - var_name: route_accel_lat
- type: ad.physics.Acceleration
- doc: >
- The ego vehicle's acceleration component _lat_ regarding the route.
- # --------------------------------------
- - var_name: route_accel_lon
- type: ad.physics.Acceleration
- doc: >
- The ego vehicle's acceleration component _lon_ regarding the route.
- # --------------------------------------
- - var_name: avg_route_accel_lat
- type: ad.physics.Acceleration
- doc: >
- The ego vehicle's acceleration component _lat_ regarding the route, smoothed by an average filter.
- # --------------------------------------
- - var_name: avg_route_accel_lon
- type: ad.physics.Acceleration
- doc: >
- The ego vehicle's acceleration component _lon_ regarding the route, smoothed by an average filter.
- # - METHODS ----------------------------
- methods:
- - def_name: __str__
- # --------------------------------------
-
- - class_name: RssActorConstellationData
- # - DESCRIPTION ------------------------
- doc: >
- Data structure that is provided within the callback registered by RssSensor.register_actor_constellation_callback().
- # - PROPERTIES -------------------------
- instance_variables:
- - var_name: ego_match_object
- type: ad.map.match.Object
- doc: >
- The ego map matched information.
- # --------------------------------------
- - var_name: ego_route
- type: ad.map.route.FullRoute
- doc: >
- The ego route.
- # --------------------------------------
- - var_name: ego_dynamics_on_route
- type: carla.RssEgoDynamicsOnRoute
- doc: >
- Current ego vehicle dynamics regarding the route.
- # --------------------------------------
- - var_name: other_match_object
- type: ad.map.match.Object
- doc: >
- The other object's map matched information. This is only valid if 'other_actor' is not 'None'.
- # --------------------------------------
- - var_name: other_actor
- type: carla.Actor
- doc: >
- The other actor. This is 'None' in case of a query of default parameters or artificial objects of kind ad.rss.world.ObjectType.ArtificialObject
- with no dedicated 'carla.Actor' (as e.g. for the [road boundaries](ref_sensors.md#rss-sensor) at the moment).
- # - METHODS ----------------------------
- methods:
- - def_name: __str__
- # --------------------------------------
-
- - class_name: RssActorConstellationResult
- # - DESCRIPTION ------------------------
- doc: >
- Data structure that should be returned by the callback registered by RssSensor.register_actor_constellation_callback().
- # - PROPERTIES -------------------------
- instance_variables:
- - var_name: rss_calculation_mode
- type: ad.rss.map.RssMode
- doc: >
- The calculation mode to be applied with the actor.
- # --------------------------------------
- - var_name: restrict_speed_limit_mode
- type: ad.rss.map.RestrictSpeedLimitMode
- doc: >
- The mode for restricting speed limit.
- # --------------------------------------
- - var_name: ego_vehicle_dynamics
- type: ad.rss.world.RssDynamics
- doc: >
- The RSS dynamics to be applied for the ego vehicle.
- # --------------------------------------
- - var_name: actor_object_type
- type: ad.rss.world.ObjectType
- doc: >
- The RSS object type to be used for the actor.
- # --------------------------------------
- - var_name: actor_dynamics
- type: ad.rss.world.RssDynamics
- doc: >
- The RSS dynamics to be applied for the actor.
- # - METHODS ----------------------------
- methods:
- - def_name: __str__
- # --------------------------------------
-
- class_name: DVSEvent
# - DESCRIPTION ------------------------
doc: >
diff --git a/PythonAPI/examples/rss/__init__.py b/PythonAPI/examples/rss/__init__.py
deleted file mode 100644
index e69de29bb2d..00000000000
diff --git a/PythonAPI/examples/rss/manual_control_rss.py b/PythonAPI/examples/rss/manual_control_rss.py
deleted file mode 100755
index ff0b7e1c5b4..00000000000
--- a/PythonAPI/examples/rss/manual_control_rss.py
+++ /dev/null
@@ -1,921 +0,0 @@
-#!/usr/bin/env python
-
-# Copyright (c) 2019 Computer Vision Center (CVC) at the Universitat Autonoma de
-# Barcelona (UAB).
-# Copyright (c) 2019-2020 Intel Corporation
-#
-# This work is licensed under the terms of the MIT license.
-# For a copy, see <https://opensource.org/licenses/MIT>.
-
-# Allows controlling a vehicle with a keyboard. For a simpler and more
-# documented example, please take a look at tutorial.py.
-
-"""
-Welcome to CARLA manual control.
-
-Use ARROWS or WASD keys for control.
-
- W : throttle
- S : brake
- AD : steer
- Q : toggle reverse
- Space : hand-brake
- P : toggle autopilot
-
- TAB : change view
- Backspace : change vehicle
-
- R : toggle recording images to disk
-
- F2 : toggle RSS visualization mode
- F3 : increase log level
- F4 : decrease log level
- F5 : increase map log level
- F6 : decrease map log level
- B : toggle RSS Road Boundaries Mode
- G : RSS check drop current route
- T : toggle RSS
- N : pause simulation
-
- F1 : toggle HUD
- H/? : toggle help
- ESC : quit
-"""
-
-from __future__ import print_function
-
-
-# ==============================================================================
-# -- find carla module ---------------------------------------------------------
-# ==============================================================================
-
-
-import glob
-import os
-import sys
-import signal
-
-try:
- sys.path.append(glob.glob(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + '/carla/dist/carla-*%d.%d-%s.egg' % (
- sys.version_info.major,
- sys.version_info.minor,
- 'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
-except IndexError:
- pass
-
-
-# ==============================================================================
-# -- imports -------------------------------------------------------------------
-# ==============================================================================
-
-
-import carla
-
-from carla import ColorConverter as cc
-
-import argparse
-import logging
-import math
-import random
-import weakref
-from rss_sensor import RssSensor # pylint: disable=relative-import
-from rss_visualization import RssUnstructuredSceneVisualizer, RssBoundingBoxVisualizer, RssStateVisualizer # pylint: disable=relative-import
-
-try:
- import pygame
- from pygame.locals import KMOD_CTRL
- from pygame.locals import KMOD_SHIFT
- from pygame.locals import K_BACKSPACE
- from pygame.locals import K_TAB
- from pygame.locals import K_DOWN
- from pygame.locals import K_ESCAPE
- from pygame.locals import K_F1
- from pygame.locals import K_F2
- from pygame.locals import K_F3
- from pygame.locals import K_F4
- from pygame.locals import K_F5
- from pygame.locals import K_F6
- from pygame.locals import K_LEFT
- from pygame.locals import K_RIGHT
- from pygame.locals import K_SLASH
- from pygame.locals import K_SPACE
- from pygame.locals import K_UP
- from pygame.locals import K_a
- from pygame.locals import K_b
- from pygame.locals import K_d
- from pygame.locals import K_g
- from pygame.locals import K_h
- from pygame.locals import K_n
- from pygame.locals import K_p
- from pygame.locals import K_q
- from pygame.locals import K_r
- from pygame.locals import K_s
- from pygame.locals import K_w
- from pygame.locals import K_l
- from pygame.locals import K_i
- from pygame.locals import K_z
- from pygame.locals import K_x
- from pygame.locals import MOUSEBUTTONDOWN
- from pygame.locals import MOUSEBUTTONUP
-except ImportError:
- raise RuntimeError('cannot import pygame, make sure pygame package is installed')
-
-try:
- import numpy as np
-except ImportError:
- raise RuntimeError('cannot import numpy, make sure numpy package is installed')
-
-
-# ==============================================================================
-# -- World ---------------------------------------------------------------------
-# ==============================================================================
-
-
-class World(object):
-
- def __init__(self, carla_world, args):
- self.world = carla_world
- self.sync = args.sync
- self.actor_role_name = args.rolename
- self.dim = (args.width, args.height)
- try:
- self.map = self.world.get_map()
- except RuntimeError as error:
- print('RuntimeError: {}'.format(error))
- print(' The server could not send the OpenDRIVE (.xodr) file:')
- print(' Make sure it exists, has the same name of your town, and is correct.')
- sys.exit(1)
- self.external_actor = args.externalActor
-
- self.hud = HUD(args.width, args.height, carla_world)
- self.recording_frame_num = 0
- self.recording = False
- self.recording_dir_num = 0
- self.player = None
- self.actors = []
- self.rss_sensor = None
- self.rss_unstructured_scene_visualizer = None
- self.rss_bounding_box_visualizer = None
- self._actor_filter = args.filter
- if not self._actor_filter.startswith("vehicle."):
- print('Error: RSS only supports vehicles as ego.')
- sys.exit(1)
-
- self.restart()
- self.world_tick_id = self.world.on_tick(self.hud.on_world_tick)
-
- def toggle_pause(self):
- settings = self.world.get_settings()
- self.pause_simulation(not settings.synchronous_mode)
-
- def pause_simulation(self, pause):
- settings = self.world.get_settings()
- if pause and not settings.synchronous_mode:
- settings.synchronous_mode = True
- settings.fixed_delta_seconds = 0.05
- self.world.apply_settings(settings)
- elif not pause and settings.synchronous_mode:
- settings.synchronous_mode = False
- settings.fixed_delta_seconds = None
- self.world.apply_settings(settings)
-
- def restart(self):
-
- if self.external_actor:
- # Check whether there is already an actor with defined role name
- for actor in self.world.get_actors():
- if actor.attributes.get('role_name') == self.actor_role_name:
- self.player = actor
- break
- else:
- # Get a random blueprint.
- blueprint = random.choice(self.world.get_blueprint_library().filter(self._actor_filter))
- blueprint.set_attribute('role_name', self.actor_role_name)
- if blueprint.has_attribute('color'):
- color = random.choice(blueprint.get_attribute('color').recommended_values)
- blueprint.set_attribute('color', color)
- if blueprint.has_attribute('driver_id'):
- driver_id = random.choice(blueprint.get_attribute('driver_id').recommended_values)
- blueprint.set_attribute('driver_id', driver_id)
- if blueprint.has_attribute('is_invincible'):
- blueprint.set_attribute('is_invincible', 'true')
- # Spawn the player.
- if self.player is not None:
- spawn_point = self.player.get_transform()
- spawn_point.location.z += 2.0
- spawn_point.rotation.roll = 0.0
- spawn_point.rotation.pitch = 0.0
- self.destroy()
- self.player = self.world.try_spawn_actor(blueprint, spawn_point)
- while self.player is None:
- if not self.map.get_spawn_points():
- print('There are no spawn points available in your map/town.')
- print('Please add some Vehicle Spawn Point to your UE4 scene.')
- sys.exit(1)
- spawn_points = self.map.get_spawn_points()
- spawn_point = random.choice(spawn_points) if spawn_points else carla.Transform()
- self.player = self.world.try_spawn_actor(blueprint, spawn_point)
-
- if self.external_actor:
- ego_sensors = []
- for actor in self.world.get_actors():
- if actor.parent == self.player:
- ego_sensors.append(actor)
-
- for ego_sensor in ego_sensors:
- if ego_sensor is not None:
- ego_sensor.destroy()
-
- # Set up the sensors.
- self.camera = Camera(self.player, self.dim)
- self.rss_unstructured_scene_visualizer = RssUnstructuredSceneVisualizer(self.player, self.world, self.dim)
- self.rss_bounding_box_visualizer = RssBoundingBoxVisualizer(self.dim, self.world, self.camera.sensor)
- self.rss_sensor = RssSensor(self.player, self.world,
- self.rss_unstructured_scene_visualizer, self.rss_bounding_box_visualizer, self.hud.rss_state_visualizer)
-
- if self.sync:
- self.world.tick()
- else:
- self.world.wait_for_tick()
-
- def tick(self, clock):
- self.hud.tick(self.player, clock)
-
- def toggle_recording(self):
- if not self.recording:
- dir_name = "_out%04d" % self.recording_dir_num
- while os.path.exists(dir_name):
- self.recording_dir_num += 1
- dir_name = "_out%04d" % self.recording_dir_num
- self.recording_frame_num = 0
- os.mkdir(dir_name)
- else:
- self.hud.notification('Recording finished (folder: _out%04d)' % self.recording_dir_num)
-
- self.recording = not self.recording
-
- def render(self, display):
- self.camera.render(display)
- self.rss_bounding_box_visualizer.render(display, self.camera.current_frame)
- self.rss_unstructured_scene_visualizer.render(display)
- self.hud.render(display)
-
- if self.recording:
- pygame.image.save(display, "_out%04d/%08d.bmp" % (self.recording_dir_num, self.recording_frame_num))
- self.recording_frame_num += 1
-
- def destroy(self):
- # stop from ticking
- if self.world_tick_id:
- self.world.remove_on_tick(self.world_tick_id)
-
- if self.camera:
- self.camera.destroy()
- if self.rss_sensor:
- self.rss_sensor.destroy()
- if self.rss_unstructured_scene_visualizer:
- self.rss_unstructured_scene_visualizer.destroy()
- if self.player:
- self.player.destroy()
-
-
-# ==============================================================================
-# -- Camera --------------------------------------------------------------------
-# ==============================================================================
-
-class Camera(object):
-
- def __init__(self, parent_actor, display_dimensions):
- self.surface = None
- self._parent = parent_actor
- self.current_frame = None
- bp_library = self._parent.get_world().get_blueprint_library()
- bp = bp_library.find('sensor.camera.rgb')
- bp.set_attribute('image_size_x', str(display_dimensions[0]))
- bp.set_attribute('image_size_y', str(display_dimensions[1]))
- self.sensor = self._parent.get_world().spawn_actor(bp, carla.Transform(carla.Location(
- x=-5.5, z=2.5), carla.Rotation(pitch=8.0)), attach_to=self._parent, attachment_type=carla.AttachmentType.SpringArmGhost)
-
- # We need to pass the lambda a weak reference to self to avoid
- # circular reference.
- weak_self = weakref.ref(self)
- self.sensor.listen(lambda image: Camera._parse_image(weak_self, image))
-
- def destroy(self):
- self.sensor.stop()
- self.sensor.destroy()
- self.sensor = None
-
- def render(self, display):
- if self.surface is not None:
- display.blit(self.surface, (0, 0))
-
- @staticmethod
- def _parse_image(weak_self, image):
- self = weak_self()
- if not self:
- return
- self.current_frame = image.frame
- image.convert(cc.Raw)
- array = np.frombuffer(image.raw_data, dtype=np.dtype("uint8"))
- array = np.reshape(array, (image.height, image.width, 4))
- array = array[:, :, :3]
- array = array[:, :, ::-1]
- self.surface = pygame.surfarray.make_surface(array.swapaxes(0, 1))
-
-# ==============================================================================
-# -- VehicleControl -----------------------------------------------------------
-# ==============================================================================
-
-
-class VehicleControl(object):
-
- """Class that handles keyboard and mouse input."""
-
- MOUSE_STEERING_RANGE = 200
- signal_received = False
-
- def __init__(self, world, start_in_autopilot):
- self._autopilot_enabled = start_in_autopilot
- self._world = world
- self._control = carla.VehicleControl()
- self._lights = carla.VehicleLightState.NONE
- world.player.set_autopilot(self._autopilot_enabled)
- self._restrictor = carla.RssRestrictor()
- self._vehicle_physics = world.player.get_physics_control()
- world.player.set_light_state(self._lights)
- self._steer_cache = 0.0
- self._mouse_steering_center = None
-
- self._surface = pygame.Surface((self.MOUSE_STEERING_RANGE * 2, self.MOUSE_STEERING_RANGE * 2))
- self._surface.set_colorkey(pygame.Color('black'))
- self._surface.set_alpha(60)
-
- line_width = 2
- pygame.draw.polygon(self._surface,
- (0, 0, 255),
- [
- (0, 0),
- (0, self.MOUSE_STEERING_RANGE * 2 - line_width),
- (self.MOUSE_STEERING_RANGE * 2 - line_width,
- self.MOUSE_STEERING_RANGE * 2 - line_width),
- (self.MOUSE_STEERING_RANGE * 2 - line_width, 0),
- (0, 0)
- ], line_width)
- pygame.draw.polygon(self._surface,
- (0, 0, 255),
- [
- (0, self.MOUSE_STEERING_RANGE),
- (self.MOUSE_STEERING_RANGE * 2, self.MOUSE_STEERING_RANGE)
- ], line_width)
- pygame.draw.polygon(self._surface,
- (0, 0, 255),
- [
- (self.MOUSE_STEERING_RANGE, 0),
- (self.MOUSE_STEERING_RANGE, self.MOUSE_STEERING_RANGE * 2)
- ], line_width)
-
- world.hud.notification("Press 'H' or '?' for help.", seconds=4.0)
-
- def render(self, display):
- if self._mouse_steering_center:
- display.blit(
- self._surface, (self._mouse_steering_center[0] - self.MOUSE_STEERING_RANGE, self._mouse_steering_center[1] - self.MOUSE_STEERING_RANGE))
-
- @staticmethod
- def signal_handler(signum, _):
- print('\nReceived signal {}. Trigger stopping...'.format(signum))
- VehicleControl.signal_received = True
-
- def parse_events(self, world, clock, sync_mode):
- if VehicleControl.signal_received:
- print('\nAccepted signal. Stopping loop...')
- return True
- if isinstance(self._control, carla.VehicleControl):
- current_lights = self._lights
- for event in pygame.event.get():
- if event.type == pygame.QUIT:
- return True
- elif event.type == pygame.KEYUP:
- if self._is_quit_shortcut(event.key):
- return True
- elif event.key == K_BACKSPACE:
- if self._autopilot_enabled:
- world.player.set_autopilot(False)
- world.restart()
- world.player.set_autopilot(True)
- else:
- world.restart()
- elif event.key == K_F1:
- world.hud.toggle_info()
- elif event.key == K_h or (event.key == K_SLASH and pygame.key.get_mods() & KMOD_SHIFT):
- world.hud.help.toggle()
- elif event.key == K_TAB:
- world.rss_unstructured_scene_visualizer.toggle_camera()
- elif event.key == K_n:
- world.toggle_pause()
- elif event.key == K_r:
- world.toggle_recording()
- elif event.key == K_F2:
- if self._world and self._world.rss_sensor:
- self._world.rss_sensor.toggle_debug_visualization_mode()
- elif event.key == K_F3:
- if self._world and self._world.rss_sensor:
- self._world.rss_sensor.decrease_log_level()
- self._restrictor.set_log_level(self._world.rss_sensor.log_level)
- elif event.key == K_F4:
- if self._world and self._world.rss_sensor:
- self._world.rss_sensor.increase_log_level()
- self._restrictor.set_log_level(self._world.rss_sensor.log_level)
- elif event.key == K_F5:
- if self._world and self._world.rss_sensor:
- self._world.rss_sensor.decrease_map_log_level()
- elif event.key == K_F6:
- if self._world and self._world.rss_sensor:
- self._world.rss_sensor.increase_map_log_level()
- elif event.key == K_b:
- if self._world and self._world.rss_sensor:
- if self._world.rss_sensor.sensor.road_boundaries_mode == carla.RssRoadBoundariesMode.Off:
- self._world.rss_sensor.sensor.road_boundaries_mode = carla.RssRoadBoundariesMode.On
- print("carla.RssRoadBoundariesMode.On")
- else:
- self._world.rss_sensor.sensor.road_boundaries_mode = carla.RssRoadBoundariesMode.Off
- print("carla.RssRoadBoundariesMode.Off")
- elif event.key == K_g:
- if self._world and self._world.rss_sensor:
- self._world.rss_sensor.drop_route()
- if isinstance(self._control, carla.VehicleControl):
- if event.key == K_q:
- self._control.gear = 1 if self._control.reverse else -1
- elif event.key == K_p and not pygame.key.get_mods() & KMOD_CTRL:
- self._autopilot_enabled = not self._autopilot_enabled
- world.player.set_autopilot(self._autopilot_enabled)
- world.hud.notification(
- 'Autopilot %s' % ('On' if self._autopilot_enabled else 'Off'))
- elif event.key == K_l and pygame.key.get_mods() & KMOD_CTRL:
- current_lights ^= carla.VehicleLightState.Special1
- elif event.key == K_l and pygame.key.get_mods() & KMOD_SHIFT:
- current_lights ^= carla.VehicleLightState.HighBeam
- elif event.key == K_l:
- # Use 'L' key to switch between lights:
- # closed -> position -> low beam -> fog
- if not self._lights & carla.VehicleLightState.Position:
- world.hud.notification("Position lights")
- current_lights |= carla.VehicleLightState.Position
- else:
- world.hud.notification("Low beam lights")
- current_lights |= carla.VehicleLightState.LowBeam
- if self._lights & carla.VehicleLightState.LowBeam:
- world.hud.notification("Fog lights")
- current_lights |= carla.VehicleLightState.Fog
- if self._lights & carla.VehicleLightState.Fog:
- world.hud.notification("Lights off")
- current_lights ^= carla.VehicleLightState.Position
- current_lights ^= carla.VehicleLightState.LowBeam
- current_lights ^= carla.VehicleLightState.Fog
- elif event.key == K_i:
- current_lights ^= carla.VehicleLightState.Interior
- elif event.key == K_z:
- current_lights ^= carla.VehicleLightState.LeftBlinker
- elif event.key == K_x:
- current_lights ^= carla.VehicleLightState.RightBlinker
- elif event.type == MOUSEBUTTONDOWN:
- # store current mouse position for mouse-steering
- if event.button == 1:
- self._mouse_steering_center = event.pos
- elif event.type == MOUSEBUTTONUP:
- if event.button == 1:
- self._mouse_steering_center = None
- if not self._autopilot_enabled:
- prev_steer_cache = self._steer_cache
- self._parse_vehicle_keys(pygame.key.get_pressed(), clock.get_time())
- if pygame.mouse.get_pressed()[0]:
- self._parse_mouse(pygame.mouse.get_pos())
- self._control.reverse = self._control.gear < 0
-
- vehicle_control = self._control
- world.hud.original_vehicle_control = vehicle_control
- world.hud.restricted_vehicle_control = vehicle_control
-
- # limit speed to 30 km/h
- v = self._world.player.get_velocity()
- if (3.6 * math.sqrt(v.x**2 + v.y**2 + v.z**2)) > 30.0:
- self._control.throttle = 0
-
- # if self._world.rss_sensor and self._world.rss_sensor.ego_dynamics_on_route and not self._world.rss_sensor.ego_dynamics_on_route.ego_center_within_route:
- # print ("Not on route!" + str(self._world.rss_sensor.ego_dynamics_on_route))
- if self._restrictor:
- rss_proper_response = self._world.rss_sensor.proper_response if self._world.rss_sensor and self._world.rss_sensor.response_valid else None
- if rss_proper_response:
- if not (pygame.key.get_mods() & KMOD_CTRL):
- vehicle_control = self._restrictor.restrict_vehicle_control(
- vehicle_control, rss_proper_response, self._world.rss_sensor.ego_dynamics_on_route, self._vehicle_physics)
- world.hud.restricted_vehicle_control = vehicle_control
- world.hud.allowed_steering_ranges = self._world.rss_sensor.get_steering_ranges()
- if world.hud.original_vehicle_control.steer != world.hud.restricted_vehicle_control.steer:
- self._steer_cache = prev_steer_cache
-
- # Set automatic control-related vehicle lights
- if vehicle_control.brake:
- current_lights |= carla.VehicleLightState.Brake
- else: # Remove the Brake flag
- current_lights &= carla.VehicleLightState.All ^ carla.VehicleLightState.Brake
- if vehicle_control.reverse:
- current_lights |= carla.VehicleLightState.Reverse
- else: # Remove the Reverse flag
- current_lights &= carla.VehicleLightState.All ^ carla.VehicleLightState.Reverse
- if current_lights != self._lights: # Change the light state only if necessary
- self._lights = current_lights
- world.player.set_light_state(carla.VehicleLightState(self._lights))
-
- world.player.apply_control(vehicle_control)
-
- def _parse_vehicle_keys(self, keys, milliseconds):
- if keys[K_UP] or keys[K_w]:
- self._control.throttle = min(self._control.throttle + 0.2, 1)
- else:
- self._control.throttle = max(self._control.throttle - 0.2, 0)
-
- if keys[K_DOWN] or keys[K_s]:
- self._control.brake = min(self._control.brake + 0.2, 1)
- else:
- self._control.brake = max(self._control.brake - 0.2, 0)
-
- steer_increment = 5e-4 * milliseconds
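- # e.g. at the 60 FPS target of the main loop this is roughly 0.008 per frame,
- # so holding a steering key for about two seconds reaches full steering lock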
- if keys[K_LEFT] or keys[K_a]:
- if self._steer_cache > 0:
- self._steer_cache = 0
- else:
- self._steer_cache -= steer_increment
- elif keys[K_RIGHT] or keys[K_d]:
- if self._steer_cache < 0:
- self._steer_cache = 0
- else:
- self._steer_cache += steer_increment
- elif self._steer_cache > 0:
- self._steer_cache = max(self._steer_cache - steer_increment, 0.0)
- elif self._steer_cache < 0:
- self._steer_cache = min(self._steer_cache + steer_increment, 0.0)
- else:
- self._steer_cache = 0
-
- self._steer_cache = min(1.0, max(-1.0, self._steer_cache))
- self._control.steer = round(self._steer_cache, 1)
- self._control.hand_brake = keys[K_SPACE]
-
- def _parse_mouse(self, pos):
- if not self._mouse_steering_center:
- return
-
- lateral = float(pos[0] - self._mouse_steering_center[0])
- longitudinal = float(pos[1] - self._mouse_steering_center[1])
- max_val = self.MOUSE_STEERING_RANGE
- lateral = max(-max_val, min(max_val, lateral))
- longitudinal = max(-max_val, min(max_val, longitudinal))
- self._control.steer = lateral / max_val
- if longitudinal < 0.0:
- self._control.throttle = -longitudinal / max_val
- self._control.brake = 0.0
- elif longitudinal > 0.0:
- self._control.throttle = 0.0
- self._control.brake = longitudinal / max_val
-
- @staticmethod
- def _is_quit_shortcut(key):
- return (key == K_ESCAPE) or (key == K_q and pygame.key.get_mods() & KMOD_CTRL)
-
-
-# ==============================================================================
-# -- HUD -----------------------------------------------------------------------
-# ==============================================================================
-
-
-class HUD(object):
-
- def __init__(self, width, height, world):
- self.dim = (width, height)
- self._world = world
- self.map_name = world.get_map().name
- font = pygame.font.Font(pygame.font.get_default_font(), 20)
- font_name = 'courier' if os.name == 'nt' else 'mono'
- fonts = [x for x in pygame.font.get_fonts() if font_name in x]
- default_font = 'ubuntumono'
- mono = default_font if default_font in fonts else fonts[0]
- mono = pygame.font.match_font(mono)
- self._font_mono = pygame.font.Font(mono, 12 if os.name == 'nt' else 14)
- self._notifications = FadingText(font, (width, 40), (0, height - 40))
- self.help = HelpText(pygame.font.Font(mono, 16), width, height)
- self.server_fps = 0
- self.frame = 0
- self.simulation_time = 0
- self.original_vehicle_control = None
- self.restricted_vehicle_control = None
- self.allowed_steering_ranges = []
- self._show_info = True
- self._info_text = []
- self._server_clock = pygame.time.Clock()
- self.rss_state_visualizer = RssStateVisualizer(self.dim, self._font_mono, self._world)
-
- def on_world_tick(self, timestamp):
- self._server_clock.tick()
- self.server_fps = self._server_clock.get_fps()
- self.frame = timestamp.frame
- self.simulation_time = timestamp.elapsed_seconds
-
- def tick(self, player, clock):
- self._notifications.tick(clock)
- if not self._show_info:
- return
- t = player.get_transform()
- v = player.get_velocity()
- c = player.get_control()
-
- self._info_text = [
- 'Server: % 16.0f FPS' % self.server_fps,
- 'Client: % 16.0f FPS' % clock.get_fps(),
- 'Map: % 20s' % self.map_name,
- '',
- 'Speed: % 15.0f km/h' % (3.6 * math.sqrt(v.x**2 + v.y**2 + v.z**2)),
- 'Location:% 20s' % ('(% 5.1f, % 5.1f)' % (t.location.x, t.location.y)),
- 'Heading: % 20.2f' % math.radians(t.rotation.yaw),
- '']
- if self.original_vehicle_control:
- orig_control = self.original_vehicle_control
- restricted_control = self.restricted_vehicle_control
- allowed_steering_ranges = self.allowed_steering_ranges
- self._info_text += [
- ('Throttle:', orig_control.throttle, 0.0, 1.0, restricted_control.throttle),
- ('Steer:', orig_control.steer, -1.0, 1.0, restricted_control.steer, allowed_steering_ranges),
- ('Brake:', orig_control.brake, 0.0, 1.0, restricted_control.brake)]
- self._info_text += [
- ('Reverse:', c.reverse),
- '']
-
- def toggle_info(self):
- self._show_info = not self._show_info
-
- def notification(self, text, seconds=2.0):
- self._notifications.set_text(text, seconds=seconds)
-
- def error(self, text):
- self._notifications.set_text('Error: %s' % text, (255, 0, 0))
-
- def render(self, display):
- if self._show_info:
- info_surface = pygame.Surface((220, self.dim[1]))
- info_surface.set_alpha(100)
- display.blit(info_surface, (0, 0))
- v_offset = 4
- bar_h_offset = 100
- bar_width = 106
- for item in self._info_text:
- text_color = (255, 255, 255)
- if v_offset + 18 > self.dim[1]:
- break
- if isinstance(item, list):
- if len(item) > 1:
- points = [(x + 8, v_offset + 8 + (1.0 - y) * 30) for x, y in enumerate(item)]
- pygame.draw.lines(display, (255, 136, 0), False, points, 2)
- item = None
- v_offset += 18
- elif isinstance(item, tuple):
- if isinstance(item[1], bool):
- rect = pygame.Rect((bar_h_offset, v_offset + 2), (10, 10))
- pygame.draw.rect(display, (255, 255, 255), rect, 0 if item[1] else 1)
- else:
- # draw allowed steering ranges
- if len(item) == 6 and item[2] < 0.0:
- for steering_range in item[5]:
- starting_value = min(steering_range[0], steering_range[1])
- length = (max(steering_range[0], steering_range[1]) -
- min(steering_range[0], steering_range[1])) / 2
- rect = pygame.Rect(
- (bar_h_offset + (starting_value + 1) * (bar_width / 2), v_offset + 2), (length * bar_width, 14))
- pygame.draw.rect(display, (0, 255, 0), rect)
-
- # draw border
- rect_border = pygame.Rect((bar_h_offset, v_offset + 2), (bar_width, 14))
- pygame.draw.rect(display, (255, 255, 255), rect_border, 1)
-
- # draw value / restricted value
- input_value_rect_fill = 0
- if len(item) >= 5:
- if item[1] != item[4]:
- input_value_rect_fill = 1
- f = (item[4] - item[2]) / (item[3] - item[2])
- if item[2] < 0.0:
- rect = pygame.Rect(
- (bar_h_offset + 1 + f * (bar_width - 6), v_offset + 3), (12, 12))
- else:
- rect = pygame.Rect((bar_h_offset + 1, v_offset + 3), (f * bar_width, 12))
- pygame.draw.rect(display, (255, 0, 0), rect)
-
- f = (item[1] - item[2]) / (item[3] - item[2])
- rect = None
- if item[2] < 0.0:
- rect = pygame.Rect((bar_h_offset + 2 + f * (bar_width - 14), v_offset + 4), (10, 10))
- else:
- if item[1] != 0:
- rect = pygame.Rect((bar_h_offset + 2, v_offset + 4), (f * (bar_width - 4), 10))
- if rect:
- pygame.draw.rect(display, (255, 255, 255), rect, input_value_rect_fill)
- item = item[0]
- if item: # At this point it has to be a str.
- surface = self._font_mono.render(item, True, text_color)
- display.blit(surface, (8, v_offset))
- v_offset += 18
-
- self.rss_state_visualizer.render(display, v_offset)
- self._notifications.render(display)
- self.help.render(display)
-
-
-# ==============================================================================
-# -- FadingText ----------------------------------------------------------------
-# ==============================================================================
-
-
-class FadingText(object):
-
- def __init__(self, font, dim, pos):
- self.font = font
- self.dim = dim
- self.pos = pos
- self.seconds_left = 0
- self.surface = pygame.Surface(self.dim)
-
- def set_text(self, text, color=(255, 255, 255), seconds=2.0):
- text_texture = self.font.render(text, True, color)
- self.surface = pygame.Surface(self.dim)
- self.seconds_left = seconds
- self.surface.fill((0, 0, 0, 0))
- self.surface.blit(text_texture, (10, 11))
-
- def tick(self, clock):
- delta_seconds = 1e-3 * clock.get_time()
- self.seconds_left = max(0.0, self.seconds_left - delta_seconds)
- self.surface.set_alpha(500.0 * self.seconds_left)
-
- def render(self, display):
- display.blit(self.surface, self.pos)
-
-
-# ==============================================================================
-# -- HelpText ------------------------------------------------------------------
-# ==============================================================================
-
-
-class HelpText(object):
-
- """Helper class to handle text output using pygame"""
-
- def __init__(self, font, width, height):
- lines = __doc__.split('\n')
- self.font = font
- self.line_space = 18
- self.dim = (780, len(lines) * self.line_space + 12)
- self.pos = (0.5 * width - 0.5 * self.dim[0], 0.5 * height - 0.5 * self.dim[1])
- self.seconds_left = 0
- self.surface = pygame.Surface(self.dim)
- self.surface.fill((0, 0, 0, 0))
- for n, line in enumerate(lines):
- text_texture = self.font.render(line, True, (255, 255, 255))
- self.surface.blit(text_texture, (22, n * self.line_space))
- self._render = False
- self.surface.set_alpha(220)
-
- def toggle(self):
- self._render = not self._render
-
- def render(self, display):
- if self._render:
- display.blit(self.surface, self.pos)
-
-# ==============================================================================
-# -- game_loop() ---------------------------------------------------------------
-# ==============================================================================
-
-
-def game_loop(args):
- pygame.init()
- pygame.font.init()
- world = None
-
- try:
- client = carla.Client(args.host, args.port)
- client.set_timeout(2.0)
-
- display = pygame.display.set_mode(
- (args.width, args.height),
- pygame.HWSURFACE | pygame.DOUBLEBUF)
-
- sim_world = client.get_world()
- original_settings = sim_world.get_settings()
- settings = sim_world.get_settings()
- if args.sync != settings.synchronous_mode:
- args.sync = True
- settings.synchronous_mode = True
- settings.fixed_delta_seconds = 0.05
- sim_world.apply_settings(settings)
-
- traffic_manager = client.get_trafficmanager()
- traffic_manager.set_synchronous_mode(True)
-
- world = World(sim_world, args)
- controller = VehicleControl(world, args.autopilot)
-
- clock = pygame.time.Clock()
- while True:
- if args.sync:
- sim_world.tick()
- clock.tick_busy_loop(60)
- if controller.parse_events(world, clock, args.sync):
- return
- world.tick(clock)
- world.render(display)
- controller.render(display)
- pygame.display.flip()
-
- finally:
-
- if world is not None:
- print('Destroying the world...')
- world.destroy()
- print('Destroyed!')
-
- pygame.quit()
-
-
-# ==============================================================================
-# -- main() --------------------------------------------------------------------
-# ==============================================================================
-
-def main():
- argparser = argparse.ArgumentParser(
- description='CARLA Manual Control Client RSS')
- argparser.add_argument(
- '-v', '--verbose',
- action='store_true',
- dest='debug',
- help='print debug information')
- argparser.add_argument(
- '--host',
- metavar='H',
- default='127.0.0.1',
- help='IP of the host server (default: 127.0.0.1)')
- argparser.add_argument(
- '-p', '--port',
- metavar='P',
- default=2000,
- type=int,
- help='TCP port of the CARLA server (default: 2000)')
- argparser.add_argument(
- '-a', '--autopilot',
- action='store_true',
- help='enable autopilot')
- argparser.add_argument(
- '--res',
- metavar='WIDTHxHEIGHT',
- default='1280x720',
- help='window resolution (default: 1280x720)')
- argparser.add_argument(
- '--filter',
- metavar='PATTERN',
- default='vehicle.*',
- help='actor filter (default: "vehicle.*")')
- argparser.add_argument(
- '--rolename',
- metavar='NAME',
- default='hero',
- help='actor role name (default: "hero")')
- argparser.add_argument(
- '--externalActor',
- action='store_true',
- help='attach to an externally created actor with the given role name')
- argparser.add_argument(
- '--sync',
- action='store_true',
- help='Activate synchronous mode execution')
- args = argparser.parse_args()
-
- args.width, args.height = [int(x) for x in args.res.split('x')]
-
- log_level = logging.DEBUG if args.debug else logging.INFO
- logging.basicConfig(format='%(levelname)s: %(message)s', level=log_level)
-
- logging.info('listening to server %s:%s', args.host, args.port)
-
- print(__doc__)
-
- signal.signal(signal.SIGINT, VehicleControl.signal_handler)
-
- try:
- game_loop(args)
-
- except KeyboardInterrupt:
- print('\nCancelled by user. Bye!')
-
-
-if __name__ == '__main__':
-
- main()
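-
-
-# Illustrative usage (not part of the original script); the file name below is assumed.
-# With a CARLA server running on 127.0.0.1:2000 and the RSS-enabled PythonAPI build
-# ("make PythonAPI.rss") installed:
-#
-#     python manual_control_rss.py --sync --res 1280x720 --rolename hero
-#
-# While driving, 'B' toggles the RSS road boundaries mode, 'F2' toggles the RSS debug
-# visualization, 'N' pauses the simulation and holding 'Ctrl' temporarily bypasses the
-# RSS restriction of the applied vehicle control.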
diff --git a/PythonAPI/examples/rss/rss_sensor.py b/PythonAPI/examples/rss/rss_sensor.py
deleted file mode 100644
index 02266071897..00000000000
--- a/PythonAPI/examples/rss/rss_sensor.py
+++ /dev/null
@@ -1,457 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright (c) 2020 Intel Corporation
-#
-
-import glob
-import os
-import sys
-
-try:
- sys.path.append(glob.glob(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + '/carla/dist/carla-*%d.%d-%s.egg' % (
- sys.version_info.major,
- sys.version_info.minor,
- 'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
-except IndexError:
- pass
-
-import inspect
-import carla
-from carla import ad
-import math
-from rss_visualization import RssDebugVisualizer # pylint: disable=relative-import
-
-
-# ==============================================================================
-# -- RssSensor -----------------------------------------------------------------
-# ==============================================================================
-
-
-class RssStateInfo(object):
-
- def __init__(self, rss_state, ego_dynamics_on_route, world_model):
- self.rss_state = rss_state
- self.distance = -1
- self.is_dangerous = ad.rss.state.isDangerous(rss_state)
- if rss_state.situationType == ad.rss.situation.SituationType.Unstructured:
- self.actor_calculation_mode = ad.rss.map.RssMode.Unstructured
- else:
- self.actor_calculation_mode = ad.rss.map.RssMode.Structured
-
- # calculate distance to other vehicle
- object_state = None
- for scene in world_model.scenes:
- if scene.object.objectId == rss_state.objectId:
- object_state = scene.object.state
- break
-
- if object_state:
- self.distance = math.sqrt((float(ego_dynamics_on_route.ego_center.x) - float(object_state.centerPoint.x))**2 +
- (float(ego_dynamics_on_route.ego_center.y) - float(object_state.centerPoint.y))**2)
-
- self.longitudinal_margin = float(rss_state.longitudinalState.rssStateInformation.currentDistance - rss_state.longitudinalState.rssStateInformation.safeDistance)
- self.margin = max(0, self.longitudinal_margin)
- self.lateral_margin = None
- if rss_state.lateralStateLeft.rssStateInformation.evaluator != "None":
- self.lateral_margin = rss_state.lateralStateLeft.rssStateInformation.currentDistance - rss_state.lateralStateLeft.rssStateInformation.safeDistance
- if rss_state.lateralStateRight.rssStateInformation.evaluator != "None":
- lateral_margin_right = rss_state.lateralStateRight.rssStateInformation.currentDistance - rss_state.lateralStateRight.rssStateInformation.safeDistance
- if self.lateral_margin is None or self.lateral_margin > lateral_margin_right:
- self.lateral_margin = lateral_margin_right
- if self.lateral_margin is not None and self.lateral_margin > 0:
- self.margin += self.lateral_margin
-
- def get_actor(self, world):
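- # object ids 2**64 - 2 and 2**64 - 1 do not refer to real actors but denote
- # the left and right road border (see also RssStateVisualizer in rss_visualization.py)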
- if self.rss_state.objectId == 18446744073709551614:
- return None # "Border Left"
- elif self.rss_state.objectId == 18446744073709551615:
- return None # "Border Right"
- else:
- return world.get_actor(self.rss_state.objectId)
-
- def __str__(self):
- return "RssStateInfo: object=" + str(self.rss_state.objectId) + " dangerous=" + str(self.is_dangerous)
-
-
-class RssSensor(object):
-
- def __init__(self, parent_actor, world, unstructured_scene_visualizer, bounding_box_visualizer, state_visualizer, routing_targets=None):
- self.sensor = None
- self.unstructured_scene_visualizer = unstructured_scene_visualizer
- self.bounding_box_visualizer = bounding_box_visualizer
- self._parent = parent_actor
- self.timestamp = None
- self.response_valid = False
- self.proper_response = None
- self.rss_state_snapshot = None
- self.situation_snapshot = None
- self.world_model = None
- self.individual_rss_states = []
- self._allowed_heading_ranges = []
- self.ego_dynamics_on_route = None
- self.current_vehicle_parameters = self.get_default_parameters()
- self.route = None
- self.debug_visualizer = RssDebugVisualizer(parent_actor, world)
- self.state_visualizer = state_visualizer
- self.change_to_unstructured_position_map = dict()
-
- # get max steering angle
- physics_control = parent_actor.get_physics_control()
- self._max_steer_angle = 0.0
- for wheel in physics_control.wheels:
- if wheel.max_steer_angle > self._max_steer_angle:
- self._max_steer_angle = wheel.max_steer_angle
- self._max_steer_angle = math.radians(self._max_steer_angle)
-
- world = self._parent.get_world()
- bp = world.get_blueprint_library().find('sensor.other.rss')
- self.sensor = world.spawn_actor(bp, carla.Transform(carla.Location(x=0.0, z=0.0)), attach_to=self._parent)
- # We need to pass the lambda a weak reference to self to avoid circular
- # reference.
-
- def check_rss_class(clazz):
- return inspect.isclass(clazz) and "RssSensor" in clazz.__name__
-
- if not inspect.getmembers(carla, check_rss_class):
- raise RuntimeError('CARLA PythonAPI not compiled in RSS variant, please "make PythonAPI.rss"')
-
- self.log_level = carla.RssLogLevel.warn
- self.map_log_level = carla.RssLogLevel.warn
-
- self.set_default_parameters()
-
- self.sensor.register_actor_constellation_callback(self._on_actor_constellation_request)
-
- self.sensor.listen(self._on_rss_response)
- self.sensor.set_log_level(self.log_level)
- self.sensor.set_map_log_level(self.map_log_level)
-
- # only relevant if actor constellation callback is not registered
- # self.sensor.ego_vehicle_dynamics = self.current_vehicle_parameters
-
- self.sensor.road_boundaries_mode = carla.RssRoadBoundariesMode.Off
-
- self.sensor.reset_routing_targets()
- if routing_targets:
- for target in routing_targets:
- self.sensor.append_routing_target(target)
-
- def _on_actor_constellation_request(self, actor_constellation_data):
- # print("_on_actor_constellation_request: ", str(actor_constellation_data))
-
- actor_constellation_result = carla.RssActorConstellationResult()
- actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.NotRelevant
- actor_constellation_result.restrict_speed_limit_mode = ad.rss.map.RssSceneCreation.RestrictSpeedLimitMode.IncreasedSpeedLimit10
- actor_constellation_result.ego_vehicle_dynamics = self.current_vehicle_parameters
- actor_constellation_result.actor_object_type = ad.rss.world.ObjectType.Invalid
- actor_constellation_result.actor_dynamics = self.current_vehicle_parameters
-
- actor_id = -1
- # actor_type_id = "none"
- if actor_constellation_data.other_actor is not None:
- actor_id = actor_constellation_data.other_actor.id
- # actor_type_id = actor_constellation_data.other_actor.type_id
-
- ego_on_the_sidewalk = False
- ego_on_routeable_road = False
- for occupied_region in actor_constellation_data.ego_match_object.mapMatchedBoundingBox.laneOccupiedRegions:
- lane = ad.map.lane.getLane(occupied_region.laneId)
- if lane.type == ad.map.lane.LaneType.PEDESTRIAN:
- # if not ego_on_the_sidewalk:
- # print ( "ego-{} on lane of lane type {} => sidewalk".format(actor_id, lane.type))
- ego_on_the_sidewalk = True
- elif ad.map.lane.isRouteable(lane):
- # if not ego_on_routeable_road:
- # print ( "ego-{} on lane of lane type {} => road".format(actor_id, lane.type))
- ego_on_routeable_road = True
-
- if 'walker.pedestrian' in actor_constellation_data.other_actor.type_id:
- # determine if the pedestrian is walking on the sidewalk or on the road
- pedestrian_on_the_road = False
- pedestrian_on_the_sidewalk = False
- for occupied_region in actor_constellation_data.other_match_object.mapMatchedBoundingBox.laneOccupiedRegions:
- lane = ad.map.lane.getLane(occupied_region.laneId)
- if lane.type == ad.map.lane.LaneType.PEDESTRIAN:
- # if not pedestrian_on_the_sidewalk:
- # print ( "pedestrian-{} on lane of lane type {} => sidewalk".format(actor_id, lane.type))
- pedestrian_on_the_sidewalk = True
- else:
- # if not pedestrian_on_the_road:
- # print ( "pedestrian-{} on lane of lane type {} => road".format(actor_id, lane.type))
- pedestrian_on_the_road = True
- if ego_on_routeable_road and not ego_on_the_sidewalk and not pedestrian_on_the_road and pedestrian_on_the_sidewalk:
- # the pedestrian is on the sidewalk and not on the road, so common sense says the vehicle has priority
- # This analysis can and should be done in more detail, but it is a basic starting point for the decision
- # In addition, the road network has to be correct for this to work best
- # (currently there are no sidewalks in intersection areas)
- # print ( "pedestrian-{} Off".format(actor_id))
- actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.NotRelevant
- else:
- # print ( "pedestrian-{} Unstructured".format(actor_id))
- actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.Unstructured
- actor_constellation_result.actor_object_type = ad.rss.world.ObjectType.Pedestrian
- actor_constellation_result.actor_dynamics = self.get_pedestrian_parameters()
- elif 'vehicle' in actor_constellation_data.other_actor.type_id:
- actor_constellation_result.actor_object_type = ad.rss.world.ObjectType.OtherVehicle
-
- # set the response time of other vehicles to 2 seconds; the rest stays the same
- actor_constellation_result.actor_dynamics.responseTime = 2.0
-
- # by default, if the ego vehicle is not on the road -> unstructured
- if ego_on_routeable_road:
- actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.Structured
- else:
- # print("vehicle-{} unstructured: reason other ego not on routeable road".format(actor_id))
- actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.Unstructured
-
- # special handling for vehicles standing still
- actor_vel = actor_constellation_data.other_actor.get_velocity()
- actor_speed = math.sqrt(actor_vel.x**2 + actor_vel.y**2 + actor_vel.z**2)
- if actor_speed < 0.01:
- # reduce response time
- actor_constellation_result.actor_dynamics.responseTime = 1.0
- # still in structured?
- if actor_constellation_result.rss_calculation_mode == ad.rss.map.RssMode.Structured:
-
- actor_distance = math.sqrt(float(actor_constellation_data.ego_match_object.enuPosition.centerPoint.x -
- actor_constellation_data.other_match_object.enuPosition.centerPoint.x)**2 +
- float(actor_constellation_data.ego_match_object.enuPosition.centerPoint.y -
- actor_constellation_data.other_match_object.enuPosition.centerPoint.y)**2)
- # print("vehicle-{} unstructured check: other distance {}".format(actor_id, actor_distance))
-
- if actor_constellation_data.ego_dynamics_on_route.ego_speed < 0.01:
- # both vehicles stand still, so we have to analyze in detail if we possibly want to use
- # unstructured mode to cope with blockades on the road...
-
- if actor_distance < 10:
- # the other has to be near enough to trigger a switch to unstructured
- other_outside_routeable_road = False
- for occupied_region in actor_constellation_data.other_match_object.mapMatchedBoundingBox.laneOccupiedRegions:
- lane = ad.map.lane.getLane(occupied_region.laneId)
- if not ad.map.lane.isRouteable(lane):
- other_outside_routeable_road = True
-
- if other_outside_routeable_road:
- # if the other vehicle is somewhat outside the standard routeable road (e.g. parked at the side, ...)
- # we immediately decide for unstructured mode
- # print("vehicle-{} unstructured: reason other outside routeable
- # road".format(actor_id))
- actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.Unstructured
- else:
- # otherwise we also look at the heading delta to get some basic idea of the
- # constellation (we don't want to switch to unstructured if we are both waiting
- # behind a red light...)
- heading_delta = abs(float(actor_constellation_data.ego_match_object.enuPosition.heading -
- actor_constellation_data.other_match_object.enuPosition.heading))
- if heading_delta > 0.2: # around 11 degrees
- # print("vehicle-{} unstructured: reason heading delta
- # {}".format(actor_id, heading_delta))
- actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.Unstructured
- self.change_to_unstructured_position_map[
- actor_id] = actor_constellation_data.other_match_object.enuPosition
- else:
- # ego moves
- if actor_distance < 10:
- # if the ego vehicle moves, the other actor doesn't move and the mode was
- # previously set to unstructured, keep it
- try:
- if self.change_to_unstructured_position_map[actor_id] == actor_constellation_data.other_match_object.enuPosition:
- heading_delta = abs(float(actor_constellation_data.ego_match_object.enuPosition.heading -
- actor_constellation_data.other_match_object.enuPosition.heading))
- if heading_delta > 0.2:
- actor_constellation_result.rss_calculation_mode = ad.rss.map.RssMode.Unstructured
- else:
- del self.change_to_unstructured_position_map[actor_id]
- except (AttributeError, KeyError):
- pass
- else:
- if actor_id in self.change_to_unstructured_position_map:
- del self.change_to_unstructured_position_map[actor_id]
-
- # still in structured?
- if actor_constellation_result.rss_calculation_mode == ad.rss.map.RssMode.Structured:
- # in the structured case we have to cope with the lateral intersection checks not yet implemented in the core RSS implementation:
- # if the other vehicle is standing still, we don't assume that it will accelerate;
- # otherwise, when standing at the intersection, the acceleration within the response time
- # would allow it to enter the intersection, which the current RSS implementation would immediately consider
- # as dangerous
- # print("_on_actor_constellation_result({}) setting accelMax to
- # zero".format(actor_constellation_data.other_actor.id))
- actor_constellation_result.actor_dynamics.alphaLon.accelMax = 0.
- actor_constellation_result.actor_dynamics.alphaLat.accelMax = 0.
- else:
- # store route for debug drawings
- self.route = actor_constellation_data.ego_route
- # since the ego vehicle is controlled manually, it can easily happen that the ego vehicle
- # accelerates far more in the lateral direction than the ego_dynamics indicate.
- # In an automated vehicle this would be handled by the low-level controller when the RSS restriction
- # is taken into account properly,
- # but the simple RSS restrictor within CARLA is not able to do so...
- # So we should at least tell RSS that we accelerate more than expected,
- # so that it is able to react to it
- abs_avg_route_accel_lat = abs(float(actor_constellation_data.ego_dynamics_on_route.avg_route_accel_lat))
- if abs_avg_route_accel_lat > actor_constellation_result.ego_vehicle_dynamics.alphaLat.accelMax:
- # print("!! Route lateral dynamics exceed expectations: route:{} expected:{} !!".format(abs_avg_route_accel_lat,
- # actor_constellation_result.ego_vehicle_dynamics.alphaLat.accelMax))
- actor_constellation_result.ego_vehicle_dynamics.alphaLat.accelMax = min(20., abs_avg_route_accel_lat)
-
- # print("_on_actor_constellation_result({}-{}): ".format(actor_id,
- # actor_type_id), str(actor_constellation_result))
- return actor_constellation_result
-
- def destroy(self):
- if self.sensor:
- print("Stopping RSS sensor")
- self.sensor.stop()
- print("Deleting Scene Visualizer")
- self.unstructured_scene_visualizer = None
- print("Destroying RSS sensor")
- self.sensor.destroy()
- print("Destroyed RSS sensor")
-
- def toggle_debug_visualization_mode(self):
- self.debug_visualizer.toggleMode()
-
- def increase_log_level(self):
- print("inccrease {}".format(self.log_level))
- if self.log_level < carla.RssLogLevel.off:
- self.log_level = self.log_level+1
- self.sensor.set_log_level(self.log_level)
-
- def decrease_log_level(self):
- print("decrease {}".format(self.log_level))
- if self.log_level > carla.RssLogLevel.trace:
- self.log_level = self.log_level-1
- self.sensor.set_log_level(self.log_level)
-
- def increase_map_log_level(self):
- if self.map_log_level < carla.RssLogLevel.off:
- self.map_log_level = self.map_log_level+1
- self.sensor.set_map_log_level(self.map_log_level)
-
- def decrease_map_log_level(self):
- if self.map_log_level > carla.RssLogLevel.trace:
- self.map_log_level = self.map_log_level-1
- self.sensor.set_map_log_level(self.map_log_level)
-
- def drop_route(self):
- self.sensor.drop_route()
-
- @staticmethod
- def get_default_parameters():
- ego_dynamics = ad.rss.world.RssDynamics()
- ego_dynamics.alphaLon.accelMax = 5
- ego_dynamics.alphaLon.brakeMax = -8
- ego_dynamics.alphaLon.brakeMin = -4
- ego_dynamics.alphaLon.brakeMinCorrect = -3
- ego_dynamics.alphaLat.accelMax = 0.2
- ego_dynamics.alphaLat.brakeMin = -0.8
- ego_dynamics.lateralFluctuationMargin = 0.1
- ego_dynamics.responseTime = 0.5
- ego_dynamics.maxSpeedOnAcceleration = 100
- ego_dynamics.unstructuredSettings.pedestrianTurningRadius = 2.0
- ego_dynamics.unstructuredSettings.driveAwayMaxAngle = 2.4
- ego_dynamics.unstructuredSettings.vehicleYawRateChange = 1.3
- ego_dynamics.unstructuredSettings.vehicleMinRadius = 3.5
- ego_dynamics.unstructuredSettings.vehicleTrajectoryCalculationStep = 0.2
- ego_dynamics.unstructuredSettings.vehicleFrontIntermediateYawRateChangeRatioSteps = 4
- ego_dynamics.unstructuredSettings.vehicleBackIntermediateYawRateChangeRatioSteps = 0
- ego_dynamics.unstructuredSettings.vehicleContinueForwardIntermediateAccelerationSteps = 3
- ego_dynamics.unstructuredSettings.vehicleBrakeIntermediateAccelerationSteps = 3
- ego_dynamics.unstructuredSettings.pedestrianTurningRadius = 2.0
- ego_dynamics.unstructuredSettings.pedestrianContinueForwardIntermediateHeadingChangeRatioSteps = 3
- ego_dynamics.unstructuredSettings.pedestrianContinueForwardIntermediateAccelerationSteps = 0
- ego_dynamics.unstructuredSettings.pedestrianBrakeIntermediateAccelerationSteps = 3
- ego_dynamics.unstructuredSettings.pedestrianFrontIntermediateHeadingChangeRatioSteps = 4
- ego_dynamics.unstructuredSettings.pedestrianBackIntermediateHeadingChangeRatioSteps = 0
- return ego_dynamics
-
- def set_default_parameters(self):
- print("Use 'default' RSS Parameters")
- self.current_vehicle_parameters = self.get_default_parameters()
-
- @staticmethod
- def get_pedestrian_parameters():
- pedestrian_dynamics = ad.rss.world.RssDynamics()
- pedestrian_dynamics.alphaLon.accelMax = 2.0
- pedestrian_dynamics.alphaLon.brakeMax = -2.0
- pedestrian_dynamics.alphaLon.brakeMin = -2.0
- pedestrian_dynamics.alphaLon.brakeMinCorrect = -2.0
- pedestrian_dynamics.alphaLat.accelMax = 0.001
- pedestrian_dynamics.alphaLat.brakeMin = -0.001
- pedestrian_dynamics.lateralFluctuationMargin = 0.1
- pedestrian_dynamics.responseTime = 0.8
- pedestrian_dynamics.maxSpeedOnAcceleration = 10
- pedestrian_dynamics.unstructuredSettings.pedestrianTurningRadius = 2.0
- pedestrian_dynamics.unstructuredSettings.driveAwayMaxAngle = 2.4
- pedestrian_dynamics.unstructuredSettings.pedestrianContinueForwardIntermediateHeadingChangeRatioSteps = 3
- pedestrian_dynamics.unstructuredSettings.pedestrianContinueForwardIntermediateAccelerationSteps = 0
- pedestrian_dynamics.unstructuredSettings.pedestrianBrakeIntermediateAccelerationSteps = 3
- pedestrian_dynamics.unstructuredSettings.pedestrianFrontIntermediateHeadingChangeRatioSteps = 4
- pedestrian_dynamics.unstructuredSettings.pedestrianBackIntermediateHeadingChangeRatioSteps = 0
-
- # not used:
- pedestrian_dynamics.unstructuredSettings.vehicleYawRateChange = 1.3
- pedestrian_dynamics.unstructuredSettings.vehicleMinRadius = 3.5
- pedestrian_dynamics.unstructuredSettings.vehicleTrajectoryCalculationStep = 0.2
- pedestrian_dynamics.unstructuredSettings.vehicleFrontIntermediateYawRateChangeRatioSteps = 4
- pedestrian_dynamics.unstructuredSettings.vehicleBackIntermediateYawRateChangeRatioSteps = 0
- pedestrian_dynamics.unstructuredSettings.vehicleContinueForwardIntermediateAccelerationSteps = 3
- pedestrian_dynamics.unstructuredSettings.vehicleBrakeIntermediateAccelerationSteps = 3
- return pedestrian_dynamics
-
- def get_steering_ranges(self):
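- # convert the allowed absolute heading ranges into steering values normalized
- # by the maximum steering angle, relative to the current ego heading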
- ranges = []
- for heading_range in self._allowed_heading_ranges:
- ranges.append(
- (
- (float(self.ego_dynamics_on_route.ego_heading) - float(heading_range.begin)) / self._max_steer_angle,
- (float(self.ego_dynamics_on_route.ego_heading) - float(heading_range.end)) / self._max_steer_angle)
- )
- return ranges
-
- def _on_rss_response(self, response):
- if not self or not response:
- return
- delta_time = 0.1
- if self.timestamp:
- delta_time = response.timestamp - self.timestamp
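- # only process responses that are at most 50 ms older than the last accepted one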
- if delta_time > -0.05:
- self.timestamp = response.timestamp
- self.response_valid = response.response_valid
- self.proper_response = response.proper_response
- self.ego_dynamics_on_route = response.ego_dynamics_on_route
- self.rss_state_snapshot = response.rss_state_snapshot
- self.situation_snapshot = response.situation_snapshot
- self.world_model = response.world_model
-
- # calculate the allowed heading ranges:
- if response.proper_response.headingRanges:
- heading = float(response.ego_dynamics_on_route.ego_heading)
- heading_ranges = response.proper_response.headingRanges
- steering_range = ad.rss.state.HeadingRange()
- steering_range.begin = - self._max_steer_angle + heading
- steering_range.end = self._max_steer_angle + heading
- ad.rss.unstructured.getHeadingOverlap(steering_range, heading_ranges)
- self._allowed_heading_ranges = heading_ranges
- else:
- self._allowed_heading_ranges = []
-
- if self.unstructured_scene_visualizer:
- self.unstructured_scene_visualizer.tick(response.frame, response, self._allowed_heading_ranges)
-
- new_states = []
- for rss_state in response.rss_state_snapshot.individualResponses:
- new_states.append(RssStateInfo(rss_state, response.ego_dynamics_on_route, response.world_model))
- if len(new_states) > 0:
- new_states.sort(key=lambda rss_states: rss_states.distance)
- self.individual_rss_states = new_states
- if self.bounding_box_visualizer:
- self.bounding_box_visualizer.tick(response.frame, self.individual_rss_states)
- if self.state_visualizer:
- self.state_visualizer.tick(self.individual_rss_states)
- self.debug_visualizer.tick(self.route, not response.proper_response.isSafe,
- self.individual_rss_states, self.ego_dynamics_on_route)
-
- else:
- print("ignore outdated response {}".format(delta_time))
diff --git a/PythonAPI/examples/rss/rss_visualization.py b/PythonAPI/examples/rss/rss_visualization.py
deleted file mode 100644
index 564e1673244..00000000000
--- a/PythonAPI/examples/rss/rss_visualization.py
+++ /dev/null
@@ -1,738 +0,0 @@
-#!/usr/bin/env python
-#
-# Copyright (c) 2020 Intel Corporation
-#
-
-import glob
-import os
-import sys
-
-try:
- sys.path.append(glob.glob(os.path.dirname(os.path.dirname(os.path.dirname(os.path.abspath(__file__)))) + '/carla/dist/carla-*%d.%d-%s.egg' % (
- sys.version_info.major,
- sys.version_info.minor,
- 'win-amd64' if os.name == 'nt' else 'linux-x86_64'))[0])
-except IndexError:
- pass
-
-from enum import Enum
-import math
-import numpy as np
-import pygame
-import weakref
-import carla
-from carla import ad
-
-
-class RssStateVisualizer(object):
-
- def __init__(self, display_dimensions, font, world):
- self._surface = None
- self._display_dimensions = display_dimensions
- self._font = font
- self._world = world
-
- def tick(self, individual_rss_states):
- state_surface = pygame.Surface((220, self._display_dimensions[1]))
- state_surface.set_colorkey(pygame.Color('black'))
- v_offset = 0
-
- if individual_rss_states:
- surface = self._font.render('RSS States:', True, (255, 255, 255))
- state_surface.blit(surface, (8, v_offset))
- v_offset += 26
- for state in individual_rss_states:
- object_name = "Obj"
- if state.rss_state.objectId == 18446744073709551614:
- object_name = "Border Left"
- elif state.rss_state.objectId == 18446744073709551615:
- object_name = "Border Right"
- else:
- other_actor = state.get_actor(self._world)
- if other_actor:
- li = list(other_actor.type_id.split("."))
- if li:
- li.pop(0)
- li = [element.capitalize() for element in li]
-
- object_name = " ".join(li).strip()[:15]
-
- mode = "?"
- if state.actor_calculation_mode == ad.rss.map.RssMode.Structured:
- mode = "S"
- elif state.actor_calculation_mode == ad.rss.map.RssMode.Unstructured:
- mode = "U"
- elif state.actor_calculation_mode == ad.rss.map.RssMode.NotRelevant:
- mode = "-"
- item = '%4s % 2dm %8s' % (mode, state.distance, object_name)
-
- surface = self._font.render(item, True, (255, 255, 255))
- state_surface.blit(surface, (5, v_offset))
- color = (128, 128, 128)
- if state.actor_calculation_mode != ad.rss.map.RssMode.NotRelevant:
- if state.is_dangerous:
- color = (255, 0, 0)
- else:
- color = (0, 255, 0)
- pygame.draw.circle(state_surface, color, (12, v_offset + 7), 5)
- xpos = 184
- if state.actor_calculation_mode == ad.rss.map.RssMode.Structured:
- if not state.rss_state.longitudinalState.isSafe and ((state.rss_state.longitudinalState.rssStateInformation.evaluator == "LongitudinalDistanceSameDirectionOtherInFront") or (state.rss_state.longitudinalState.rssStateInformation.evaluator == "LongitudinalDistanceSameDirectionEgoFront")):
- pygame.draw.polygon(
- state_surface, (
- 255, 255, 255), ((xpos + 1, v_offset + 1 + 4), (xpos + 6, v_offset + 1 + 0), (xpos + 11, v_offset + 1 + 4),
- (xpos + 7, v_offset + 1 + 4), (xpos + 7, v_offset + 1 + 12), (xpos + 5, v_offset + 1 + 12), (xpos + 5, v_offset + 1 + 4)))
- xpos += 14
-
- if not state.rss_state.longitudinalState.isSafe and ((state.rss_state.longitudinalState.rssStateInformation.evaluator == "LongitudinalDistanceOppositeDirectionEgoCorrectLane") or (state.rss_state.longitudinalState.rssStateInformation.evaluator == "LongitudinalDistanceOppositeDirection")):
- pygame.draw.polygon(
- state_surface, (
- 255, 255, 255), ((xpos + 2, v_offset + 1 + 8), (xpos + 6, v_offset + 1 + 12), (xpos + 10, v_offset + 1 + 8),
- (xpos + 7, v_offset + 1 + 8), (xpos + 7, v_offset + 1 + 0), (xpos + 5, v_offset + 1 + 0), (xpos + 5, v_offset + 1 + 8)))
- xpos += 14
-
- if not state.rss_state.lateralStateRight.isSafe and not (state.rss_state.lateralStateRight.rssStateInformation.evaluator == "None"):
- pygame.draw.polygon(
- state_surface, (
- 255, 255, 255), ((xpos + 0, v_offset + 1 + 4), (xpos + 8, v_offset + 1 + 4), (xpos + 8, v_offset + 1 + 1),
- (xpos + 12, v_offset + 1 + 6), (xpos + 8, v_offset + 1 + 10), (xpos + 8, v_offset + 1 + 8), (xpos + 0, v_offset + 1 + 8)))
- xpos += 14
- if not state.rss_state.lateralStateLeft.isSafe and not (state.rss_state.lateralStateLeft.rssStateInformation.evaluator == "None"):
- pygame.draw.polygon(
- state_surface, (
- 255, 255, 255), ((xpos + 0, v_offset + 1 + 6), (xpos + 4, v_offset + 1 + 1), (xpos + 4, v_offset + 1 + 4),
- (xpos + 12, v_offset + 1 + 4), (xpos + 12, v_offset + 1 + 8), (xpos + 4, v_offset + 1 + 8), (xpos + 4, v_offset + 1 + 10)))
- xpos += 14
- elif state.actor_calculation_mode == ad.rss.map.RssMode.Unstructured:
- text = ""
- if state.rss_state.unstructuredSceneState.response == ad.rss.state.UnstructuredSceneResponse.DriveAway:
- text = " D"
- elif state.rss_state.unstructuredSceneState.response == ad.rss.state.UnstructuredSceneResponse.ContinueForward:
- text = " C"
- elif state.rss_state.unstructuredSceneState.response == ad.rss.state.UnstructuredSceneResponse.Brake:
- text = " B"
- surface = self._font.render(text, True, (255, 255, 255))
- state_surface.blit(surface, (xpos, v_offset))
-
- v_offset += 14
- self._surface = state_surface
-
- def render(self, display, v_offset):
- if self._surface:
- display.blit(self._surface, (0, v_offset))
-
-
-def get_matrix(transform):
- """
- Creates matrix from carla transform.
- """
-
- rotation = transform.rotation
- location = transform.location
- c_y = np.cos(np.radians(rotation.yaw))
- s_y = np.sin(np.radians(rotation.yaw))
- c_r = np.cos(np.radians(rotation.roll))
- s_r = np.sin(np.radians(rotation.roll))
- c_p = np.cos(np.radians(rotation.pitch))
- s_p = np.sin(np.radians(rotation.pitch))
- matrix = np.matrix(np.identity(4))
- matrix[0, 3] = location.x
- matrix[1, 3] = location.y
- matrix[2, 3] = location.z
- matrix[0, 0] = c_p * c_y
- matrix[0, 1] = c_y * s_p * s_r - s_y * c_r
- matrix[0, 2] = -c_y * s_p * c_r - s_y * s_r
- matrix[1, 0] = s_y * c_p
- matrix[1, 1] = s_y * s_p * s_r + c_y * c_r
- matrix[1, 2] = -s_y * s_p * c_r + c_y * s_r
- matrix[2, 0] = s_p
- matrix[2, 1] = -c_p * s_r
- matrix[2, 2] = c_p * c_r
- return matrix
-
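-
-# Illustrative sketch (not part of the original module): projecting a single world
-# location into pixel coordinates with the matrix returned by get_matrix() above.
-# It mirrors the calibration set up in RssUnstructuredSceneVisualizer.restart() and the
-# axis handling of transform_points() below; the helper function itself is hypothetical.
-def _example_project_point(world_location, camera, image_width, image_height, fov=90.0):
-    # pinhole intrinsics: principal point at the image centre, focal length from the fov
-    calibration = np.identity(3)
-    calibration[0, 2] = image_width / 2.0
-    calibration[1, 2] = image_height / 2.0
-    calibration[0, 0] = calibration[1, 1] = image_width / (2.0 * np.tan(fov * np.pi / 360.0))
-
-    # world -> sensor: invert the sensor-to-world matrix of the camera transform
-    world_to_sensor = np.linalg.inv(np.asarray(get_matrix(camera.get_transform())))
-    x, y, z, _ = np.dot(world_to_sensor,
-                        np.array([world_location.x, world_location.y, world_location.z, 1.0]))
-
-    # reorder the axes to the (y, -z, x) camera convention used in transform_points()
-    # and apply the intrinsics; divide by depth to obtain pixel coordinates
-    pixel = np.dot(calibration, np.array([y, -z, x]))
-    return int(pixel[0] / pixel[2]), int(pixel[1] / pixel[2])
-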
-# ==============================================================================
-# -- RssUnstructuredSceneVisualizer ------------------------------------------------
-# ==============================================================================
-
-
-class RssUnstructuredSceneVisualizerMode(Enum):
- disabled = 1
- window = 2
- fullscreen = 3
-
-
-class RssUnstructuredSceneVisualizer(object):
-
- def __init__(self, parent_actor, world, display_dimensions):
- self._last_rendered_frame = -1
- self._surface = None
- self._current_rss_surface = None
- self.current_camera_surface = (0, None)
- self._world = world
- self._parent_actor = parent_actor
- self._display_dimensions = display_dimensions
- self._camera = None
- self._mode = RssUnstructuredSceneVisualizerMode.disabled
-
- self.restart(RssUnstructuredSceneVisualizerMode.window)
-
- def destroy(self):
- if self._camera:
- self._camera.stop()
- self._camera.destroy()
- self._camera = None
-
- def restart(self, mode):
- # setup up top down camera
- self.destroy()
- self._mode = mode
-
- spawn_sensor = False
- if mode == RssUnstructuredSceneVisualizerMode.window:
- self._dim = (self._display_dimensions[0] // 3, self._display_dimensions[1] // 2)
- spawn_sensor = True
- elif mode == RssUnstructuredSceneVisualizerMode.fullscreen:
- self._dim = (self._display_dimensions[0], self._display_dimensions[1])
- spawn_sensor = True
- else:
- self._surface = None
-
- if spawn_sensor:
- self._calibration = np.identity(3)
- self._calibration[0, 2] = self._dim[0] / 2.0
- self._calibration[1, 2] = self._dim[1] / 2.0
- self._calibration[0, 0] = self._calibration[1, 1] = self._dim[0] / \
- (2.0 * np.tan(90.0 * np.pi / 360.0)) # fov default: 90.0
-
- bp_library = self._world.get_blueprint_library()
- bp = bp_library.find('sensor.camera.rgb')
- bp.set_attribute('image_size_x', str(self._dim[0]))
- bp.set_attribute('image_size_y', str(self._dim[1]))
-
- self._camera = self._world.spawn_actor(
- bp,
- carla.Transform(carla.Location(x=7.5, z=10), carla.Rotation(pitch=-90)),
- attach_to=self._parent_actor)
- # We need to pass the lambda a weak reference to self to avoid
- # circular reference.
- weak_self = weakref.ref(self)
- self._camera.listen(lambda image: self._parse_image(weak_self, image))
-
- def update_surface(self, cam_frame, rss_frame):
- if self._mode == RssUnstructuredSceneVisualizerMode.disabled:
- return
- render = False
-
- if cam_frame and self._current_rss_surface and self._current_rss_surface[0] == cam_frame:
- render = True
-
- if rss_frame and self.current_camera_surface and self.current_camera_surface[0] == rss_frame:
- render = True
-
- if render:
- surface = self.current_camera_surface[1]
- surface.blit(self._current_rss_surface[1], (0, 0))
- rect = pygame.Rect((0, 0), (2, surface.get_height()))
- pygame.draw.rect(surface, (0, 0, 0), rect, 0)
- rect = pygame.Rect((0, 0), (surface.get_width(), 2))
- pygame.draw.rect(surface, (0, 0, 0), rect, 0)
- rect = pygame.Rect((0, surface.get_height() - 2), (surface.get_width(), surface.get_height()))
- pygame.draw.rect(surface, (0, 0, 0), rect, 0)
- rect = pygame.Rect((surface.get_width() - 2, 0), (surface.get_width(), surface.get_width()))
- pygame.draw.rect(surface, (0, 0, 0), rect, 0)
- self._surface = surface
-
- def toggle_camera(self):
- print("Toggle RssUnstructuredSceneVisualizer")
- if self._mode == RssUnstructuredSceneVisualizerMode.window:
- self.restart(RssUnstructuredSceneVisualizerMode.fullscreen)
- elif self._mode == RssUnstructuredSceneVisualizerMode.fullscreen:
- self.restart(RssUnstructuredSceneVisualizerMode.disabled)
- elif self._mode == RssUnstructuredSceneVisualizerMode.disabled:
- self.restart(RssUnstructuredSceneVisualizerMode.window)
-
- @staticmethod
- def _parse_image(weak_self, image):
- self = weak_self()
- if not self:
- return
- image.convert(carla.ColorConverter.Raw)
- array = np.frombuffer(image.raw_data, dtype=np.dtype("uint8"))
- array = np.reshape(array, (image.height, image.width, 4))
- array = array[:, :, :3]
- array = array[:, :, ::-1]
- surface = pygame.surfarray.make_surface(array.swapaxes(0, 1))
- self.current_camera_surface = (image.frame, surface)
- self.update_surface(image.frame, None)
-
- @staticmethod
- def rotate_around_point(xy, radians, origin):
- """Rotate a point around a given point.
- """
- x, y = xy
- offset_x, offset_y = origin
- adjusted_x = (x - offset_x)
- adjusted_y = (y - offset_y)
- cos_rad = math.cos(radians)
- sin_rad = math.sin(radians)
- qx = offset_x + cos_rad * adjusted_x - sin_rad * adjusted_y
- qy = offset_y + sin_rad * adjusted_x + cos_rad * adjusted_y
-
- return qx, qy
-
- def tick(self, frame, rss_response, allowed_heading_ranges):
- if not self._camera:
- return
- surface = pygame.Surface(self._dim)
- surface.set_colorkey(pygame.Color('black'))
- surface.set_alpha(180)
- try:
- lines = RssUnstructuredSceneVisualizer.get_trajectory_sets(
- rss_response.rss_state_snapshot, self._camera.get_transform(), self._calibration)
-
- polygons = []
- for heading_range in allowed_heading_ranges:
- polygons.append((RssUnstructuredSceneVisualizer.transform_points(
- RssUnstructuredSceneVisualizer._get_points_from_pairs(
- RssUnstructuredSceneVisualizer.draw_heading_range(
- heading_range, rss_response.ego_dynamics_on_route)),
- self._camera.get_transform(), self._calibration), (0, 0, 255)))
-
- RssUnstructuredSceneVisualizer.draw_lines(surface, lines)
- RssUnstructuredSceneVisualizer.draw_polygons(surface, polygons)
-
- except RuntimeError as e:
- print("ERROR {}".format(e))
- self._current_rss_surface = (frame, surface)
- self.update_surface(None, frame)
-
- def render(self, display):
- if self._surface:
- display.blit(self._surface, (display.get_width() - self._dim[0], 0))
-
- @staticmethod
- def draw_heading_range(heading_range, ego_dynamics_on_route):
- line = [(float(ego_dynamics_on_route.ego_center.x), float(ego_dynamics_on_route.ego_center.y))]
- length = 3.0
- current_angle = float(heading_range.begin)
- max_angle = float(heading_range.end)
- if heading_range.end < heading_range.begin:
- max_angle += 2.0 * np.pi
-
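-        # trace the allowed heading range as a pie slice around the ego centre: an arc of radius 'length', sampled in 0.2 rad steps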
- while current_angle < max_angle:
- line.append((float(ego_dynamics_on_route.ego_center.x) + length * np.cos(current_angle),
- float(ego_dynamics_on_route.ego_center.y) + length * np.sin(current_angle)))
- current_angle += 0.2
-
- if current_angle != max_angle:
- line.append((float(ego_dynamics_on_route.ego_center.x) + length * np.cos(max_angle),
- float(ego_dynamics_on_route.ego_center.y) + length * np.sin(max_angle)))
-
- line.append((float(ego_dynamics_on_route.ego_center.x), float(ego_dynamics_on_route.ego_center.y)))
- return line
-
- @staticmethod
- def get_trajectory_sets(rss_state_snapshot, camera_transform, calibration):
-        """
-        Collects the brake (red) and continue-forward (green) trajectory sets of the ego vehicle
-        and all other traffic participants, projected into the camera view.
-        """
- trajectory_sets = []
-
- # ego
- trajectory_sets.append((RssUnstructuredSceneVisualizer.transform_points(RssUnstructuredSceneVisualizer._get_trajectory_set_points(
- rss_state_snapshot.unstructuredSceneEgoInformation.brakeTrajectorySet), camera_transform, calibration), (255, 0, 0)))
- trajectory_sets.append((RssUnstructuredSceneVisualizer.transform_points(RssUnstructuredSceneVisualizer._get_trajectory_set_points(
- rss_state_snapshot.unstructuredSceneEgoInformation.continueForwardTrajectorySet), camera_transform, calibration), (0, 255, 0)))
-
- # others
- for state in rss_state_snapshot.individualResponses:
- if state.unstructuredSceneState.rssStateInformation.brakeTrajectorySet:
- trajectory_sets.append((RssUnstructuredSceneVisualizer.transform_points(RssUnstructuredSceneVisualizer._get_trajectory_set_points(
- state.unstructuredSceneState.rssStateInformation.brakeTrajectorySet), camera_transform, calibration), (255, 0, 0)))
- if state.unstructuredSceneState.rssStateInformation.continueForwardTrajectorySet:
- trajectory_sets.append((RssUnstructuredSceneVisualizer.transform_points(RssUnstructuredSceneVisualizer._get_trajectory_set_points(
- state.unstructuredSceneState.rssStateInformation.continueForwardTrajectorySet), camera_transform, calibration), (0, 255, 0)))
-
- return trajectory_sets
-
- @staticmethod
- def draw_lines(surface, lines):
-        """
-        Draws the given closed polylines onto the provided pygame surface.
-        """
- for line, color in lines:
- if len(line) > 1:
- pygame.draw.lines(surface, color, True, line, 2)
-
- @staticmethod
- def draw_polygons(surface, polygons):
-        """
-        Draws the given filled polygons onto the provided pygame surface.
-        """
- for polygon, color in polygons:
- if len(polygon) > 1:
- pygame.draw.polygon(surface, color, polygon)
-
- @staticmethod
- def transform_points(world_cords, camera_transform, calibration):
-        """
-        Projects the given world coordinates into the camera view and returns the resulting pixel coordinates.
-        """
- world_cords = np.transpose(world_cords)
- cords_x_y_z = RssUnstructuredSceneVisualizer._world_to_sensor(world_cords, camera_transform)[:3, :]
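-        # re-order the sensor-frame axes into the image convention (u right, v down, depth forward) before applying the calibration matrix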
- cords_y_minus_z_x = np.concatenate([cords_x_y_z[1, :], -cords_x_y_z[2, :], cords_x_y_z[0, :]])
- ts = np.transpose(np.dot(calibration, cords_y_minus_z_x))
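-        # perspective divide: normalise by the depth component to obtain pixel coordinates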
- camera_ts = np.concatenate([ts[:, 0] / ts[:, 2], ts[:, 1] / ts[:, 2], ts[:, 2]], axis=1)
- line_to_draw = []
- for point in camera_ts:
- line_to_draw.append((int(point[0, 0]), int(point[0, 1])))
- return line_to_draw
-
- @staticmethod
- def _get_trajectory_set_points(trajectory_set):
-        """
-        Converts an RSS trajectory set into homogeneous world coordinates (y is mirrored to match CARLA's coordinate frame).
-        """
- cords = np.zeros((len(trajectory_set), 4))
- i = 0
- for pt in trajectory_set:
- cords[i, :] = np.array([pt.x, -pt.y, 0, 1])
- i += 1
- return cords
-
- @staticmethod
- def _get_points_from_pairs(trajectory_set):
-        """
-        Converts (x, y) pairs into homogeneous world coordinates (y is mirrored to match CARLA's coordinate frame).
-        """
- cords = np.zeros((len(trajectory_set), 4))
- i = 0
- for pt in trajectory_set:
- cords[i, :] = np.array([pt[0], -pt[1], 0, 1])
- i += 1
- return cords
-
- @staticmethod
- def _world_to_sensor(cords, camera_transform):
- """
- Transforms world coordinates to sensor.
- """
- sensor_world_matrix = get_matrix(camera_transform)
- world_sensor_matrix = np.linalg.inv(sensor_world_matrix)
- sensor_cords = np.dot(world_sensor_matrix, cords)
- return sensor_cords
-
-# ==============================================================================
-# -- RssBoundingBoxVisualizer ------------------------------------------------------
-# ==============================================================================
-
-
-class RssBoundingBoxVisualizer(object):
-
- def __init__(self, display_dimensions, world, camera):
- self._last_camera_frame = 0
- self._surface_for_frame = []
- self._world = world
- self._dim = display_dimensions
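-        # pinhole intrinsics: principal point at the image centre, focal length derived from the default 90 degree horizontal FOV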
- self._calibration = np.identity(3)
- self._calibration[0, 2] = self._dim[0] / 2.0
- self._calibration[1, 2] = self._dim[1] / 2.0
- self._calibration[0, 0] = self._calibration[1, 1] = self._dim[0] / \
- (2.0 * np.tan(90.0 * np.pi / 360.0)) # fov default: 90.0
- self._camera = camera
-
- def tick(self, frame, individual_rss_states):
- if len(self._surface_for_frame) > 0:
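-            # drop queued surfaces that belong to camera frames older than the last rendered one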
- try:
- while self._surface_for_frame[0][0] < self._last_camera_frame:
- self._surface_for_frame.pop(0)
- except IndexError:
- return
-
- # only render on new frame
- if len(self._surface_for_frame) > 0:
- if self._surface_for_frame[0][0] == frame:
- return
-
- surface = pygame.Surface(self._dim)
- surface.set_colorkey(pygame.Color('black'))
- surface.set_alpha(80)
- try:
- bounding_boxes = RssBoundingBoxVisualizer.get_bounding_boxes(
- individual_rss_states, self._camera.get_transform(), self._calibration, self._world)
- RssBoundingBoxVisualizer.draw_bounding_boxes(surface, bounding_boxes)
- self._surface_for_frame.append((frame, surface, len(bounding_boxes)))
- except RuntimeError:
- pass
-
- def render(self, display, current_camera_frame):
- rendered = False
- boxes_to_render = 0
- for frame, surface, box_count in self._surface_for_frame:
- if frame == current_camera_frame:
- display.blit(surface, (0, 0))
- boxes_to_render = box_count
- rendered = True
- break
- if not rendered and boxes_to_render > 0:
- print("Warning: {} bounding boxes were not drawn.".format(boxes_to_render))
- self._last_camera_frame = current_camera_frame
-
- @staticmethod
- def get_bounding_boxes(individual_rss_states, camera_transform, calibration, world):
-        """
-        Creates projected 3D bounding boxes for all relevant and dangerous traffic participants reported by RSS.
-        """
- bounding_boxes = []
- for state in individual_rss_states:
- if state.actor_calculation_mode != ad.rss.map.RssMode.NotRelevant and state.is_dangerous:
- other_actor = state.get_actor(world)
- if other_actor:
- bounding_boxes.append(RssBoundingBoxVisualizer.get_bounding_box(
- other_actor, camera_transform, calibration))
- # filter objects behind camera
- bounding_boxes = [bb for bb in bounding_boxes if all(bb[:, 2] > 0)]
- return bounding_boxes
-
- @staticmethod
- def draw_bounding_boxes(surface, bounding_boxes, color=pygame.Color('red')):
-        """
-        Draws the projected bounding boxes onto the provided pygame surface.
-        """
- for bbox in bounding_boxes:
- points = [(int(bbox[i, 0]), int(bbox[i, 1])) for i in range(8)]
-            # draw the six faces of the box as filled polygons
-            # base
- polygon = [points[0], points[1], points[2], points[3]]
- pygame.draw.polygon(surface, color, polygon)
- # top
- polygon = [points[4], points[5], points[6], points[7]]
- pygame.draw.polygon(surface, color, polygon)
- # base-top
- polygon = [points[0], points[1], points[5], points[4]]
- pygame.draw.polygon(surface, color, polygon)
- polygon = [points[1], points[2], points[6], points[5]]
- pygame.draw.polygon(surface, color, polygon)
- polygon = [points[2], points[6], points[7], points[3]]
- pygame.draw.polygon(surface, color, polygon)
- polygon = [points[0], points[4], points[7], points[3]]
- pygame.draw.polygon(surface, color, polygon)
-
- @staticmethod
- def get_bounding_box(vehicle, camera_transform, calibration):
- """
- Returns 3D bounding box for a vehicle based on camera view.
- """
-
- bb_cords = RssBoundingBoxVisualizer._create_bb_points(vehicle)
- cords_x_y_z = RssBoundingBoxVisualizer._vehicle_to_sensor(bb_cords, vehicle, camera_transform)[:3, :]
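-        # re-order the sensor-frame axes into the image convention and apply the pinhole projection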
- cords_y_minus_z_x = np.concatenate([cords_x_y_z[1, :], -cords_x_y_z[2, :], cords_x_y_z[0, :]])
- bbox = np.transpose(np.dot(calibration, cords_y_minus_z_x))
- camera_bbox = np.concatenate([bbox[:, 0] / bbox[:, 2], bbox[:, 1] / bbox[:, 2], bbox[:, 2]], axis=1)
- return camera_bbox
-
- @staticmethod
- def _create_bb_points(vehicle):
- """
- Returns 3D bounding box for a vehicle.
- """
-
- cords = np.zeros((8, 4))
- extent = vehicle.bounding_box.extent
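-        # eight corners of the bounding box in the vehicle frame, spanned by the half extents (homogeneous coordinates)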
- cords[0, :] = np.array([extent.x, extent.y, -extent.z, 1])
- cords[1, :] = np.array([-extent.x, extent.y, -extent.z, 1])
- cords[2, :] = np.array([-extent.x, -extent.y, -extent.z, 1])
- cords[3, :] = np.array([extent.x, -extent.y, -extent.z, 1])
- cords[4, :] = np.array([extent.x, extent.y, extent.z, 1])
- cords[5, :] = np.array([-extent.x, extent.y, extent.z, 1])
- cords[6, :] = np.array([-extent.x, -extent.y, extent.z, 1])
- cords[7, :] = np.array([extent.x, -extent.y, extent.z, 1])
- return cords
-
- @staticmethod
- def _vehicle_to_sensor(cords, vehicle, camera_transform):
- """
- Transforms coordinates of a vehicle bounding box to sensor.
- """
-
- world_cord = RssBoundingBoxVisualizer._vehicle_to_world(cords, vehicle)
- sensor_cord = RssBoundingBoxVisualizer._world_to_sensor(world_cord, camera_transform)
- return sensor_cord
-
- @staticmethod
- def _vehicle_to_world(cords, vehicle):
- """
- Transforms coordinates of a vehicle bounding box to world.
- """
-
- bb_transform = carla.Transform(vehicle.bounding_box.location)
- bb_vehicle_matrix = get_matrix(bb_transform)
- vehicle_world_matrix = get_matrix(vehicle.get_transform())
- bb_world_matrix = np.dot(vehicle_world_matrix, bb_vehicle_matrix)
- world_cords = np.dot(bb_world_matrix, np.transpose(cords))
- return world_cords
-
- @staticmethod
- def _world_to_sensor(cords, camera_transform):
- """
- Transforms world coordinates to sensor.
- """
-
- sensor_world_matrix = get_matrix(camera_transform)
- world_sensor_matrix = np.linalg.inv(sensor_world_matrix)
- sensor_cords = np.dot(world_sensor_matrix, cords)
- return sensor_cords
-
-# ==============================================================================
-# -- RssDebugVisualizer ------------------------------------------------------------
-# ==============================================================================
-
-
-class RssDebugVisualizationMode(Enum):
- Off = 1
- RouteOnly = 2
- VehicleStateOnly = 3
- VehicleStateAndRoute = 4
- All = 5
-
-
-class RssDebugVisualizer(object):
-
- def __init__(self, player, world):
- self._world = world
- self._player = player
- self._visualization_mode = RssDebugVisualizationMode.Off
-
- def toggleMode(self):
- if self._visualization_mode == RssDebugVisualizationMode.All:
- self._visualization_mode = RssDebugVisualizationMode.Off
- elif self._visualization_mode == RssDebugVisualizationMode.Off:
- self._visualization_mode = RssDebugVisualizationMode.RouteOnly
- elif self._visualization_mode == RssDebugVisualizationMode.RouteOnly:
- self._visualization_mode = RssDebugVisualizationMode.VehicleStateOnly
- elif self._visualization_mode == RssDebugVisualizationMode.VehicleStateOnly:
- self._visualization_mode = RssDebugVisualizationMode.VehicleStateAndRoute
- elif self._visualization_mode == RssDebugVisualizationMode.VehicleStateAndRoute:
- self._visualization_mode = RssDebugVisualizationMode.All
- print("New Debug Visualizer Mode {}".format(self._visualization_mode))
-
- def tick(self, route, dangerous, individual_rss_states, ego_dynamics_on_route):
- if self._visualization_mode == RssDebugVisualizationMode.RouteOnly or \
- self._visualization_mode == RssDebugVisualizationMode.VehicleStateAndRoute or \
- self._visualization_mode == RssDebugVisualizationMode.All:
- self.visualize_route(dangerous, route)
-
- if self._visualization_mode == RssDebugVisualizationMode.VehicleStateOnly or \
- self._visualization_mode == RssDebugVisualizationMode.VehicleStateAndRoute or \
- self._visualization_mode == RssDebugVisualizationMode.All:
- self.visualize_rss_results(individual_rss_states)
-
- if self._visualization_mode == RssDebugVisualizationMode.All:
- self.visualize_ego_dynamics(ego_dynamics_on_route)
-
- def visualize_route(self, dangerous, route):
- if not route:
- return
- right_lane_edges = dict()
- left_lane_edges = dict()
-
- for road_segment in route.roadSegments:
- right_most_lane = road_segment.drivableLaneSegments[0]
- if right_most_lane.laneInterval.laneId not in right_lane_edges:
- edge = ad.map.route.getRightProjectedENUEdge(right_most_lane.laneInterval)
- right_lane_edges[right_most_lane.laneInterval.laneId] = edge
- intersection_lane = ad.map.intersection.Intersection.isLanePartOfAnIntersection(right_most_lane.laneInterval.laneId)
-
-                color = carla.Color(r=(128 if dangerous else 255))
-                if intersection_lane:
-                    # lane edges that are part of an intersection are highlighted in magenta
-                    color = carla.Color(r=255, g=0, b=255)
- self.visualize_enu_edge(edge, color, self._player.get_location().z)
-
- left_most_lane = road_segment.drivableLaneSegments[-1]
- if left_most_lane.laneInterval.laneId not in left_lane_edges:
- edge = ad.map.route.getLeftProjectedENUEdge(left_most_lane.laneInterval)
- left_lane_edges[left_most_lane.laneInterval.laneId] = edge
- intersection_lane = ad.map.intersection.Intersection.isLanePartOfAnIntersection(left_most_lane.laneInterval.laneId)
- color = carla.Color(g=(128 if dangerous else 255))
- if intersection_lane:
- color.b = 128 if dangerous else 255
-
- self.visualize_enu_edge(edge, color, self._player.get_location().z)
-
- def visualize_enu_edge(self, edge, color, z_offset):
- for point in edge:
- carla_point = carla.Location(x=float(point.x), y=-1. * float(point.y), z=float(point.z) + z_offset)
- self._world.debug.draw_point(carla_point, 0.1, color, 0.1, False)
-
- def visualize_rss_results(self, state_snapshot):
- for state in state_snapshot:
- other_actor = state.get_actor(self._world)
- if not other_actor:
- # print("Actor not found. Skip visualizing state {}".format(state))
- continue
- ego_point = self._player.get_location()
- ego_point.z += 0.05
- yaw = self._player.get_transform().rotation.yaw
- cosine = math.cos(math.radians(yaw))
- sine = math.sin(math.radians(yaw))
- line_offset = carla.Location(-sine * 0.1, cosine * 0.1, 0.0)
-
- point = other_actor.get_location()
- point.z += 0.05
- indicator_color = carla.Color(0, 255, 0)
- dangerous = ad.rss.state.isDangerous(state.rss_state)
- if dangerous:
- indicator_color = carla.Color(255, 0, 0)
- elif state.rss_state.situationType == ad.rss.situation.SituationType.NotRelevant:
- indicator_color = carla.Color(150, 150, 150)
-
- if self._visualization_mode == RssDebugVisualizationMode.All:
- # the connection lines are only visualized if All is requested
-                # copy the indicator color so the per-direction adjustments below do not alter the shared color object
-                lon_color = carla.Color(indicator_color.r, indicator_color.g, indicator_color.b)
-                lat_l_color = carla.Color(indicator_color.r, indicator_color.g, indicator_color.b)
-                lat_r_color = carla.Color(indicator_color.r, indicator_color.g, indicator_color.b)
- if not state.rss_state.longitudinalState.isSafe:
- lon_color.r = 255
- lon_color.g = 0 if dangerous else 255
- if not state.rss_state.lateralStateLeft.isSafe:
- lat_l_color.r = 255
- lat_l_color.g = 0 if dangerous else 255
- if not state.rss_state.lateralStateRight.isSafe:
- lat_r_color.r = 255
- lat_r_color.g = 0 if dangerous else 255
- self._world.debug.draw_line(ego_point, point, 0.1, lon_color, 0.02, False)
- self._world.debug.draw_line(ego_point - line_offset, point -
- line_offset, 0.1, lat_l_color, 0.02, False)
- self._world.debug.draw_line(ego_point + line_offset, point +
- line_offset, 0.1, lat_r_color, 0.02, False)
- point.z += 3.
- self._world.debug.draw_point(point, 0.2, indicator_color, 0.02, False)
-
- def visualize_ego_dynamics(self, ego_dynamics_on_route):
- color = carla.Color(0, 0, 255)
-
- sin_heading = math.sin(float(ego_dynamics_on_route.route_heading))
- cos_heading = math.cos(float(ego_dynamics_on_route.route_heading))
-
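-        # draw a 20 m arrow through the ego position along the route heading (y components mirrored from ENU into CARLA's frame)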
- heading_location_start = self._player.get_location()
- heading_location_start.x -= cos_heading * 10.
- heading_location_start.y += sin_heading * 10.
- heading_location_start.z += 0.5
- heading_location_end = self._player.get_location()
- heading_location_end.x += cos_heading * 10.
- heading_location_end.y -= sin_heading * 10.
- heading_location_end.z += 0.5
-
- self._world.debug.draw_arrow(heading_location_start, heading_location_end, 0.1, 0.1, color, 0.02, False)
-
- sin_center = math.sin(float(ego_dynamics_on_route.route_heading) + math.pi / 2.)
- cos_center = math.cos(float(ego_dynamics_on_route.route_heading) + math.pi / 2.)
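-        # draw a 4 m line through the ego position, perpendicular to the route heading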
- center_location_start = self._player.get_location()
- center_location_start.x -= cos_center * 2.
- center_location_start.y += sin_center * 2.
- center_location_start.z += 0.5
- center_location_end = self._player.get_location()
- center_location_end.x += cos_center * 2.
- center_location_end.y -= sin_center * 2.
- center_location_end.z += 0.5
-
- self._world.debug.draw_line(center_location_start, center_location_end, 0.1, color, 0.02, False)
diff --git a/PythonAPI/examples/rss/scenarios/unstructured_blocked_road.xosc b/PythonAPI/examples/rss/scenarios/unstructured_blocked_road.xosc
deleted file mode 100644
index 81556ba5b7d..00000000000
--- a/PythonAPI/examples/rss/scenarios/unstructured_blocked_road.xosc
+++ /dev/null
@@ -1,187 +0,0 @@
diff --git a/PythonAPI/examples/rss/scenarios/unstructured_pedestrian_on_road.xosc b/PythonAPI/examples/rss/scenarios/unstructured_pedestrian_on_road.xosc
deleted file mode 100644
index b8dbb2a1d20..00000000000
--- a/PythonAPI/examples/rss/scenarios/unstructured_pedestrian_on_road.xosc
+++ /dev/null
@@ -1,223 +0,0 @@
diff --git a/README.md b/README.md
index 81885bc8f7c..ee22b6d8703 100644
--- a/README.md
+++ b/README.md
@@ -131,8 +131,6 @@ CARLA specific assets are distributed under CC-BY License.
#### CARLA Dependency and Integration licenses
-The ad-rss-lib library compiled and linked by the [RSS Integration build variant](Docs/adv_rss.md) introduces [LGPL-2.1-only License](https://opensource.org/licenses/LGPL-2.1).
-
Unreal Engine 4 follows its [own license terms](https://www.unrealengine.com/en-US/faq).
CARLA uses three dependencies as part of the SUMO integration:
diff --git a/mkdocs.yml b/mkdocs.yml
index c2a5aae009d..801cfc4da80 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -44,7 +44,6 @@ nav:
- 'NVIDIA SimReady': 'ecosys_simready.md'
- 'ASAM OpenDRIVE': 'adv_opendrive.md'
- 'PTV Vissim': 'adv_ptv.md'
- - 'RSS': 'adv_rss.md'
- 'ROS': https://carla.readthedocs.io/projects/ros-bridge/en/latest/
- 'Scenic': 'tuto_G_scenic.md'
- 'SUMO': 'adv_sumo.md'