Model Training and Testing

This page shows how to train a model for a self-driving car and how to test the performance of a trained model in both simulated and real-world environments.

Model Training

For training the model, we use deeptesla (the GitHub link can be found on the main wiki page).

Selecting Data

The path used for accessing the datasets is set in the params.py file; if it is not correct for your system, change it:

data_dir = ... #Replace ... with the path to the datasets directory

The same must be done for the maximum steering angle of the car, to ensure all angle values are accurate:

maxAngle = _ #Replace _ with the maximum steering angle

If either or both of these are already correct, these steps can be ignored.

The datasets that are used for training and validation can also be set in the params.py file, specifically:

 epochs['train'] = [...] #Replace ... with dataset numbers 
 epochs['val'] = [...] #Replace ... with dataset numbers
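
For example, a filled-in params.py might look like the following (a hypothetical configuration; the path, maximum angle, and dataset numbers are illustrative only):

 import os

 data_dir = os.path.abspath('datasets')  # directory containing dataset1, dataset2, ...
 maxAngle = 30.0                         # illustrative maximum steering angle

 # 'epochs' is the dict already defined in params.py
 epochs['train'] = [1, 2, 3]             # datasets used for training
 epochs['val'] = [4]                     # dataset used for validation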

Once the data is set, the model can be trained by running:

 $ python train.py

While training, two loss values, train loss and val loss, are displayed every ten training steps; they show the loss on the training and validation data, respectively. After every 100 training steps, the trained model is saved in a directory whose name is determined by the save_dir variable in params.py. For example, if save_dir = os.path.abspath('models'), then the model will be stored in a directory called 'models' under the deeptesla directory. The same variable is used when loading the model, so it can be changed to test different models: simply set save_dir = os.path.abspath(<model_folder>), where <model_folder> is the name of the directory where the desired model was saved.
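
The exact training loop lives in train.py; as a minimal sketch, a checkpoint saved this way can later be restored with Tensorflow's Saver along the following lines (assuming the model's graph has already been built, as Tensorflow 1.x requires):

 import os
 import tensorflow as tf

 save_dir = os.path.abspath('models')  # must match save_dir in params.py

 # Assumes the model's graph (and its variables) has already been constructed.
 saver = tf.train.Saver()
 with tf.Session() as sess:
     ckpt = tf.train.get_checkpoint_state(save_dir)
     if ckpt and ckpt.model_checkpoint_path:
         saver.restore(sess, ckpt.model_checkpoint_path)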

It is recommended to train the model on a system with an Nvidia GPU so that Tensorflow with GPU support can be used. If GPU-enabled Tensorflow is not already installed on the system, it can be installed using pip:

 $ pip install tensorflow-gpu     

This significantly reduces the amount of time it takes to train the model. Then, after the model is trained, copy the directory the model was saved in back to the Raspberry Pi.

Model Testing

A visualization of model performance is also possible with deeptesla (Note: OpenCV 3+ is required for this).

Visualization

Similar to training the model, the specific videos that will be used to test the model must be selected first. This can be done in the run.py file by modifying the line:

 epoch_ids = [...] #Replace ... with dataset numbers

Once the videos have been selected, the visualization can be performed by running:

 $ python run.py

This creates a 1280x720 video copy of out-mencoder.avi, named out-mencoder2.avi, for each dataset selected. These copies are then used to create a visual representation of the model's performance. These visualizations can be found in the output/ folder.
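
run.py's exact implementation is not reproduced here, but a 1280x720 copy of this kind can be produced with OpenCV 3+ roughly as follows (a sketch; the dataset path is illustrative):

 import cv2

 cap = cv2.VideoCapture('dataset1/out-mencoder.avi')  # illustrative path
 fourcc = cv2.VideoWriter_fourcc(*'XVID')
 out = cv2.VideoWriter('out-mencoder2.avi', fourcc, 30.0, (1280, 720))
 while True:
     ret, frame = cap.read()
     if not ret:
         break
     out.write(cv2.resize(frame, (1280, 720)))
 cap.release()
 out.release()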

Each visualization video has an overlay that can be used to evaluate model performance and contains the following elements:

  • A section that displays various statistics. Currently, it displays the actual and predicted steering angles, the model's error, the average error and standard deviation, and the time it takes for the angle to be calculated (in ms).
    • Note: the error, average error, and standard deviation shown are all relative to the maximum steering angle of the car.
  • Two steering wheel images: one represents the actual steering angle and the other the angle predicted by the model. The predicted wheel is also colored to show how close it is to the actual steering angle (green = close to actual, red = far from actual).
  • A graph depicting the actual and predicted steering angles.
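
The overlay statistics are straightforward to compute. A hypothetical helper for the relative error and the wheel coloring might look like this (the names are illustrative, not run.py's actual ones):

 def relative_error(actual, predicted, max_angle):
     """Steering error as a fraction of the car's maximum steering angle."""
     return abs(actual - predicted) / max_angle

 def wheel_color(rel_err):
     """Blend from green (close to actual) to red (far); BGR tuple for OpenCV."""
     rel_err = min(max(rel_err, 0.0), 1.0)
     return (0, int(255 * (1 - rel_err)), int(255 * rel_err))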

Real World Testing

The model can also be tested in real world scenarios, but this requires Tensorflow to be installed on the Raspberry Pi. One way to do this is with pip:

 $ pip install tensorflow

If this fails, Tensorflow can also be installed by following the instructions in the README of the Tensorflow on Raspberry Pi project.

The model can then be tested by using the command:

 $ python controller.py

This will make the car drive autonomously at a set speed. The car can be stopped by sending an interrupt signal (Ctrl-C on the keyboard).

Note: a state variable ranging from -1 to 1, which increments/decrements in eighths (-.125, 0, .125, .25, ...), is used to estimate the current angle of the car and determine whether any turns should occur. A positive value represents a left turn, a negative value a right turn, and 0 the center position. Because it is only an estimate, it may be inaccurate at times, which can result in decreased model performance.
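
As a rough sketch of that bookkeeping (the function and variable names here are hypothetical, not controller.py's actual ones):

 STEP = 0.125  # the state changes in eighths

 def update_state(state, command):
     """Estimate the wheel angle in [-1, 1]: positive = left, negative = right, 0 = center."""
     if command == 'left':
         state += STEP
     elif command == 'right':
         state -= STEP
     return max(-1.0, min(1.0, state))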

Training with Dataset Aggregation

The model can be further trained using Dataset Aggregation, or DAgger for short. This is done by running the command:

 $ python dagger.py

This acts in a manner similar to controller.py, but gives the user the option to control the steering and speed. All speed inputs from the user are accepted, but an angle input may be rejected, in which case the car uses the model's angle instead; the decision depends on whether a random value exceeds a threshold determined by the user (see the sketch after this list). After the program finishes, the data from the run is stored in the datasets directory under a folder called 'dataset_dagger', which contains two files:

  • dagger.mp4: The video file of the run itself. The frame number and the value of the state variable are shown in the top left corner.
  • data.csv: In addition to the frame number, this contains the model's angle, the expert's angle, the randomly chosen angle, and the value of the state variable. The run can then be used for further training by renaming dataset_dagger to 'dataset#', where # is an integer value, and dagger.mp4 to 'out-mencoder.avi'. Note: after each run, the contents of dataset_dagger are overwritten, so a run's data must be saved before the next run or it will be lost.
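
A minimal sketch of the accept-or-reject decision described above (the threshold name is hypothetical; dagger.py's actual logic and variable names may differ):

 import random

 def choose_angle(model_angle, expert_angle, threshold):
     """DAgger-style mixing: reject the expert's steering input when a
     random draw exceeds the user-determined threshold."""
     if random.random() > threshold:
         return model_angle   # ignore the user's angle this step
     return expert_angle      # accept the user's angle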

Examples of Car Self-Driving

The car self-driving on a custom-made track

Track Gif

Model: TrackModel
Video Source: https://github.com/heechul/picar/tree/master/videos/Track2.mp4

The car self-driving in a hallway

Hall Gif

Model: HallModel
Video Source: https://github.com/heechul/picar/tree/master/videos/Hall1.mp4

Training with the Udacity Self-Driving Car Simulator

It is also possible to train the model using data collected from the Udacity Self-Driving Car Simulator. To do this, create a new dataset folder in the datasets directory (preferably using the naming convention dataset#, where # is an integer value) and use it for storing the data (when prompted by Udacity, navigate to and select the folder you just created). Once the data has been collected, the new dataset should contain a directory labeled 'IMG' that holds the frames of the car driving and a file driving_log.csv that contains various statistics about the car's performance. In order to use these with deeptesla, a video must be created from the images in the IMG folder. First, rename the IMG folder to 'png' and driving_log.csv to 'data.csv'. Then use video.py from Udacity's Behavioral Cloning project, modified to create a video compatible with deeptesla (unmodified, the created video does not work with deeptesla). From the datasets directory, a video can be created with the following command:

$ python video.py dataset#/png #Replace # with the integer value of the dataset

Note: the python module 'moviepy' is required. It can be installed using pip:

$ pip install moviepy #If this returns a permission denied error, run it again with 'sudo' at the front

This creates a video 'png.mp4' in the dataset's folder, which should then be renamed to 'out-mencoder.avi'. The final alteration is to the data.csv file: add a row at the top to label the columns. Specifically, the fourth column (column D) must be labeled 'angle'. The remaining cells can be labeled as the user sees fit, or even left blank. At this point, the data collected from Udacity should be usable by deeptesla for training purposes.
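
A hypothetical one-off script for the header step (the dataset name is illustrative):

 import csv

 path = 'dataset5/data.csv'  # illustrative dataset name
 with open(path) as f:
     rows = list(csv.reader(f))

 header = [''] * len(rows[0])
 header[3] = 'angle'  # column D holds the steering angle
 with open(path, 'w') as f:
     csv.writer(f).writerows([header] + rows)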

Testing the Model with the Udacity Simulator

The trained model can also be tested in the Udacity simulator using the drive.py file (https://github.com/udacity/CarND-Behavioral-Cloning-P3/blob/master/drive.py), also from the Udacity Behavioral Cloning Project. The file was modified so that it opens and gets the predicted angles from the Tensorflow model instead of a Keras model. To test the model, open the Udacity simulator and start it in Autonomous mode. Then, from the deeptesla directory, run the command:

$ python drive.py

It should then connect to the simulator and the car should start self-driving. The model's performance can then be tested as the car drives around the track.

Challenges

Precise Angle Control

The most notable problem encountered in this project was the lack of precise angle control, as the car had a DC motor. As a result, we were unable to precisely control the angle of the car and had to rely on the state variable's angle estimate being accurate. This also meant there was no way to automatically center the wheels to a 0 degree angle; they had to be centered manually before each run so that the state variable was correct at the start of each run.

Future Work

Car Hardware

As stated above, the biggest issue in this project was the lack of precise angle control due to the hardware of the car. A car with a more precise motor (stepper, servo, etc.) would be preferable, as it would allow for better control of the car and, therefore, better testing of model performance.

Real World ROS Controller

An experimental controller for testing models in the real world using ROS can be used by running the command:

$ python daggerROS.py

However, it is not currently optimal, for a few reasons:

  • The turning of the car's wheels differs from that produced by the non-ROS commands used to turn the car. Therefore, the car may drive in an undesirable fashion.
  • Rather than getting the frames fed to the model directly from the car's camera, it takes them from the stream that starts when picar_base is launched (see the Web-based control and video streaming (w/ ROS) section on the main wiki page).

In the future, this could be rectified so that the ROS controller can drive the car as similarly as possible to the non-ROS controller.

Keras Model Testing

It is possible to train an initial Keras model, from the picar/ directory, with the command:

$ python kerasModel.py

However, there is currently no way to test the performance of the model. Simulated or real world environments could be used to not only test the model, but also compare it to a deeptesla model trained in the same environment.