Computer Vision - Landmark Detection & Robot Tracking (SLAM)

This repository lists my findings for the "Landmark Detection and Tracking (SLAM)" project of Udacity's Computer Vision Nanodegree.

Project Overview

In this project, I implemented SLAM (Simultaneous Localization and Mapping) for a two-dimensional world, combining knowledge of robot sensor measurements and movement to create a map of an environment from only the sensor and motion data gathered by a robot over time. SLAM provides a way to track the location of a robot in the world in real time and to identify the locations of landmarks such as buildings, trees, rocks, and other world features. It is an active area of research in the fields of robotics and autonomous systems.

Below is an example of a 2D robot world with landmarks (purple x's) and the robot (a red 'o'), localized using only sensor and motion data collected by that robot.
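To make the sensing side concrete, here is a minimal sketch of how such a robot might take noisy landmark measurements. The class, noise model, and parameter names are illustrative assumptions, not the project's exact starter code.

```python
import random

class Robot:
    """Toy robot in a square 2D world (an illustrative sketch, not the starter code)."""

    def __init__(self, world_size=10.0, measurement_range=5.0, measurement_noise=0.2):
        self.world_size = world_size
        self.measurement_range = measurement_range
        self.measurement_noise = measurement_noise
        self.x = world_size / 2.0  # start in the middle of the world
        self.y = world_size / 2.0
        self.landmarks = []        # list of (x, y) landmark positions

    def sense(self):
        """Return noisy [index, dx, dy] readings for landmarks within sensor range."""
        measurements = []
        for i, (lx, ly) in enumerate(self.landmarks):
            dx = lx - self.x + self.measurement_noise * (2.0 * random.random() - 1.0)
            dy = ly - self.y + self.measurement_noise * (2.0 * random.random() - 1.0)
            if abs(dx) <= self.measurement_range and abs(dy) <= self.measurement_range:
                measurements.append([i, dx, dy])
        return measurements
```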

The project is broken into three Python notebooks; the first two explore the provided code and review SLAM architectures:

Notebook 1: 1. Robot Moving and Sensing.ipynb

Notebook 2: 2. Omega and Xi, Constraints.ipynb

Notebook 3: 3. Landmark Detection and Tracking.ipynb
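Notebooks 2 and 3 build on the omega/xi formulation of Graph SLAM: each motion and each landmark measurement adds a linear constraint to a matrix omega and a vector xi, and the best estimate mu of all poses and landmark positions is recovered by solving omega · mu = xi. The sketch below illustrates that update pattern for a simplified 1D world; the function, its uniform constraint weights, and the toy data are my own illustration, not the notebook's exact code.

```python
import numpy as np

def graph_slam_1d(initial_pos, motions, measurements, num_landmarks):
    """1D Graph SLAM sketch: accumulate constraints in omega/xi, then solve for mu.

    motions: commanded move dx for each time step
    measurements: per time step, a list of (landmark_index, measured_distance) pairs
    """
    n_poses = len(motions) + 1
    dim = n_poses + num_landmarks
    omega = np.zeros((dim, dim))
    xi = np.zeros(dim)

    # Anchor the initial position so the system has a unique solution.
    omega[0, 0] += 1.0
    xi[0] += initial_pos

    # Motion constraints: x_{t+1} - x_t = dx
    for t, dx in enumerate(motions):
        omega[t, t] += 1.0
        omega[t + 1, t + 1] += 1.0
        omega[t, t + 1] -= 1.0
        omega[t + 1, t] -= 1.0
        xi[t] -= dx
        xi[t + 1] += dx

    # Measurement constraints: L_j - x_t = measured_distance
    for t, obs in enumerate(measurements):
        for lm, dist in obs:
            j = n_poses + lm
            omega[t, t] += 1.0
            omega[j, j] += 1.0
            omega[t, j] -= 1.0
            omega[j, t] -= 1.0
            xi[t] -= dist
            xi[j] += dist

    mu = np.linalg.solve(omega, xi)  # best estimate of all poses and landmarks
    return mu[:n_poses], mu[n_poses:]

# Toy data: robot at 0 moves +1 twice; one landmark sits 5 units ahead.
poses, landmarks = graph_slam_1d(
    initial_pos=0.0,
    motions=[1.0, 1.0],
    measurements=[[(0, 5.0)], [(0, 4.0)], [(0, 3.0)]],
    num_landmarks=1,
)
print(poses)      # ~[0. 1. 2.]
print(landmarks)  # ~[5.]
```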

Project Instructions

All of the starting code and resources needed to complete this project are in this GitHub repository. Before starting, make sure that all the libraries and dependencies required to support this project are installed. If you have already created a cv-nd environment for the exercise code, you can use that environment; instructions for creation and activation are below.

Local Environment Instructions

  1. Clone the repository, and navigate to the downloaded folder.
git clone https://github.com/udacity/P3_Implement_SLAM.git
cd P3_Implement_SLAM

  2. Create (and activate) a new environment, named cv-nd, with Python 3.6. If prompted to proceed with the install (Proceed [y]/n), type y.

    • Linux or Mac:
    conda create -n cv-nd python=3.6
    source activate cv-nd
    
    • Windows:
    conda create --name cv-nd python=3.6
    activate cv-nd
    

    At this point your command line should look something like: (cv-nd) <User>:P3_Implement_SLAM <user>$. The (cv-nd) indicates that your environment has been activated, and you can proceed with further package installations.

  3. Install a few required pip packages, which are specified in the requirements text file (including OpenCV).

pip install -r requirements.txt

Testing

Test your implementation of slam: two test_data cases are provided. Run slam on each and check that the result matches the expected output.
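As a rough illustration, such a check might look like the snippet below. The array values are made-up placeholders standing in for your slam output and the expected test_data values; they are not the project's actual numbers.

```python
import numpy as np

# Made-up placeholder values: in the notebook, estimated_landmarks would come
# from your slam() output and expected_landmarks from a provided test_data case.
estimated_landmarks = np.array([[82.9, 13.5], [70.4, 74.5]])
expected_landmarks = np.array([[83.0, 13.4], [70.3, 74.6]])

# Estimates won't match exactly because of noise, so compare within a tolerance.
if np.allclose(estimated_landmarks, expected_landmarks, atol=0.5):
    print("Landmark estimates match the expected output within tolerance.")
else:
    print("Landmark estimates deviate from the expected output.")
```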

LICENSE: This project is licensed under the terms of the MIT license.
