jakemclaughlin6/README.md

Hi there 👋

I'm Jake, an Autonomy Developer at OTTO Motors!

I did my Master's at the University of Waterloo, focusing on tightly coupled Visual-LiDAR SLAM for automated infrastructure inspections.

I'm currently on the Perception team at OTTO Motors, working on robust SLAM in warehouse environments; my specialty is calibration of the various sensors on our AMR platforms.

LinkedIn · OTTO

Projects

Visual Inertial Odometry

This video shows the Visual-Inertial Odometry I implemented from the ground up as part of beam_slam, built on Locus Robotics' fuse repository along with our in-lab library libbeam, where many of my contributions lie. The goal of this VIO implementation was to provide a platform for further research and enhancement (learning-based feature tracking, MLPnP, semantic segmentation, etc.). It was also implemented with coupling to LiDAR odometry in mind, along with coupled Visual-LiDAR place recognition for robust and accurate loop closures.

vio.mp4
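
As a rough illustration of the kind of residual such a VIO back end optimizes for each tracked feature, here is a minimal sketch of a pinhole reprojection error. The names (PinholeCamera, ReprojectionError) and intrinsics are my own illustrative assumptions, not code from beam_slam, fuse, or libbeam.

```cpp
// Minimal sketch (not from beam_slam): the reprojection error that a
// visual-inertial back end typically minimizes for each tracked feature.
#include <Eigen/Core>
#include <Eigen/Geometry>
#include <iostream>

// Hypothetical pinhole intrinsics (fx, fy, cx, cy values are assumptions).
struct PinholeCamera {
  double fx, fy, cx, cy;
  Eigen::Vector2d Project(const Eigen::Vector3d& p_cam) const {
    return {fx * p_cam.x() / p_cam.z() + cx,
            fy * p_cam.y() / p_cam.z() + cy};
  }
};

// Residual between the measured pixel and the landmark reprojected through
// the current camera pose estimate (T_world_camera = {q, t}).
Eigen::Vector2d ReprojectionError(const PinholeCamera& cam,
                                  const Eigen::Quaterniond& q_world_camera,
                                  const Eigen::Vector3d& t_world_camera,
                                  const Eigen::Vector3d& landmark_world,
                                  const Eigen::Vector2d& measured_pixel) {
  // Transform the landmark from the world frame into the camera frame.
  const Eigen::Vector3d p_cam =
      q_world_camera.conjugate() * (landmark_world - t_world_camera);
  return measured_pixel - cam.Project(p_cam);
}

int main() {
  PinholeCamera cam{600.0, 600.0, 320.0, 240.0};
  Eigen::Quaterniond q = Eigen::Quaterniond::Identity();
  Eigen::Vector3d t(0.0, 0.0, 0.0);
  Eigen::Vector3d landmark(0.5, -0.2, 4.0);
  Eigen::Vector2d pixel = cam.Project(landmark);  // perfect measurement
  std::cout << "residual: "
            << ReprojectionError(cam, q, t, landmark, pixel).transpose()
            << "\n";
  return 0;
}
```

In a fuse-based system such residuals would be wrapped as constraints and optimized jointly with inertial factors; the sketch above only shows the geometric part.
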
Visual-Lidar Map Alignment

As part of my thesis work, I implemented an offline tool to automatically align maps generated from SLAM. Because there is no real-time constraint, this allows for more robust, decoupled approaches to visual or LiDAR place recognition. The purpose of this work is to enable repeated inspections of the same area without being confined to one of the few multi-session SLAM packages (namely ORB-SLAM3, RTAB-Map, maplab, and lt-mapper). See my repository vl_traj_alignment for the implementation.

Before Alignment / After Alignment
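
To give a flavor of the alignment step, here is a minimal sketch of estimating a rigid transform between two maps from matched 3D landmark pairs (e.g., produced by place recognition) using the standard Kabsch/Umeyama closed-form solution. The names (AlignMaps, RigidTransform) and the toy data are illustrative assumptions, not the vl_traj_alignment implementation.

```cpp
// Minimal sketch (not the vl_traj_alignment code): given 3D point
// correspondences between two SLAM maps, recover the rigid transform that
// aligns map B to map A with the closed-form Kabsch/Umeyama solution.
#include <Eigen/Core>
#include <Eigen/SVD>
#include <iostream>
#include <vector>

struct RigidTransform {
  Eigen::Matrix3d R;
  Eigen::Vector3d t;
};

RigidTransform AlignMaps(const std::vector<Eigen::Vector3d>& points_a,
                         const std::vector<Eigen::Vector3d>& points_b) {
  // Centroids of the matched points in each map.
  Eigen::Vector3d mu_a = Eigen::Vector3d::Zero();
  Eigen::Vector3d mu_b = Eigen::Vector3d::Zero();
  for (size_t i = 0; i < points_a.size(); ++i) {
    mu_a += points_a[i];
    mu_b += points_b[i];
  }
  mu_a /= static_cast<double>(points_a.size());
  mu_b /= static_cast<double>(points_b.size());

  // Cross-covariance of the centered correspondences.
  Eigen::Matrix3d H = Eigen::Matrix3d::Zero();
  for (size_t i = 0; i < points_a.size(); ++i) {
    H += (points_b[i] - mu_b) * (points_a[i] - mu_a).transpose();
  }

  // Rotation from the SVD of H, with a reflection check on the determinant.
  Eigen::JacobiSVD<Eigen::Matrix3d> svd(
      H, Eigen::ComputeFullU | Eigen::ComputeFullV);
  Eigen::Matrix3d R = svd.matrixV() * svd.matrixU().transpose();
  if (R.determinant() < 0.0) {
    Eigen::Matrix3d V = svd.matrixV();
    V.col(2) *= -1.0;
    R = V * svd.matrixU().transpose();
  }
  return {R, mu_a - R * mu_b};
}

int main() {
  // Toy example: map B is map A rotated 90 degrees about z and shifted.
  std::vector<Eigen::Vector3d> a = {{0, 0, 0}, {1, 0, 0}, {0, 1, 0}, {0, 0, 1}};
  Eigen::Matrix3d R_true;
  R_true << 0, -1, 0, 1, 0, 0, 0, 0, 1;
  std::vector<Eigen::Vector3d> b;
  for (const auto& p : a) b.push_back(R_true * p + Eigen::Vector3d(2, 0, 0));
  // Aligning b back onto a should recover the inverse of that motion.
  RigidTransform T = AlignMaps(a, b);
  std::cout << "R:\n" << T.R << "\nt: " << T.t.transpose() << "\n";
  return 0;
}
```

In practice the correspondences would come from visual or LiDAR place recognition between sessions, and a closed-form estimate like this would typically seed a robust refinement (e.g., outlier rejection over the matches).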

Pinned

  1. BEAMRobotics/beam_slam

     Tightly coupled lidar-visual-inertial SLAM using the fuse framework

     C++ · 82 stars · 4 forks

  2. vlma

     Visual-Lidar Map Alignment

     C++ · 4 stars · 1 fork

  3. BEAMRobotics/libbeam

     Public library for all internal software

     C++ · 19 stars · 4 forks

  4. locusrobotics/fuse

     The fuse stack provides a general architecture for performing sensor fusion live on a robot. Some possible applications include state estimation, localization, mapping, and calibration.

     C++ · 738 stars · 123 forks