Overview

Today, robots are our primary way to study other planets, explore the deep ocean, ensure our e-shopping arrives on time, improve the efficiency and safety of our cities, and raise agricultural productivity to feed a growing population. Robotics is the discipline concerned both with building robots, mechanically and electrically, and with designing and coding effective algorithms for their operation.

A robot is an intelligent computer system that senses and acts on the physical world. We will study the defining problems of robotics through theory and practical coding solutions. The emphasis is on algorithms, probabilistic reasoning, optimization, inference mechanisms, and behavior strategies, as opposed to electromechanical systems design. This course aims to help students improve their probabilistic modeling skills and instill a few of the core concepts in robotics:

  • robotic algorithms require our best tools in computing as well as our best knowledge of the physical world; the union of these two fields is a powerful and beautiful place.
  • a robot that explicitly accounts for its uncertainty works better than a robot that does not.
  • the intuition for many robotic algorithms comes from our understanding of our own physical abilities, and developing robotic solutions can help us better understand this aspect of intelligence.

Teaching Staff

Instructor: David Meger (dmeger@X)
Office: McConnell 112N
Office hours: Tuesday 4-5pm

Teaching Assistant: Stefan Wapnick (swapnick@X)
Office hours: Thursdays 1:30-2:30pm, Trottier room 3090

Teaching Assistant: Joshua Holla (joshua@X)
Office hours: Mondays 11am-noon, Trottier room 3090

(X = cim.mcgill.ca)

Course Communication

Course Description

This course will broadly cover the following areas:

  • Geometric Motion Planning: how can a robot plan a sequence of motions that bring it to a goal safely? How do we represent a robot's pose in the world and the shape of its environment?
  • Mapping: how can we combine noisy measurements from sensors with the robot’s pose to build a map of the environment?
  • State Estimation: the state of the robot is not always directly measurable/observable. How can we determine the relative weights of multiple sensor measurements in order to form an accurate estimate of the (hidden) state? (A short illustrative sketch follows this list.)
  • Feedback Control and Planning: how can we compute the state-(in)dependent commands that can bring a robotic system from its current state to a desired state?
  • Research and enrichment topics: how do robots see using cameras? How can a robot build models of its own dynamics from data and use these to perform tasks? How do robots interact with humans today, and how do we want them to interact with us in the future?
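
As a small, purely illustrative preview of the reasoning above (this is not course-provided code, and the function name and noise values are our own choices), the sketch below fuses two noisy measurements of the same hidden quantity by weighting each with the inverse of its variance:

    # Illustrative sketch only: inverse-variance fusion of two scalar measurements.
    def fuse(z1, var1, z2, var2):
        """Minimum-variance estimate of a hidden state measured twice."""
        w1, w2 = 1.0 / var1, 1.0 / var2
        estimate = (w1 * z1 + w2 * z2) / (w1 + w2)
        variance = 1.0 / (w1 + w2)
        return estimate, variance

    # A precise sensor (variance 0.1) dominates a noisy one (variance 1.0).
    print(fuse(2.0, 0.1, 3.0, 1.0))  # estimate near 2.09, variance near 0.09

The Kalman filter lectures develop this weighting idea into a full recursive estimator.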

Schedule

Each entry below lists the lecture number and date, the topics covered, any tutorial or quiz, the slides (pdf, pptx), and any readings.
1 Sep 3 Introduction
Motivation, logistics, rough description of assignments, sense-plan-act paradigm.
This website acts as the course outline.
Quiz 0 (Introduction, Background, Expectations)
Slides: pdf, pptx
2 Sep 5 Introduction by Example: Kinematics, Maps and Plans
Examples: Dubins car navigation through a city, robot arms loading boxes in a warehouse. Holonomic vs. non-holonomic systems.
Slides: pdf, pptx
3 Sep 10 Introduction by Example: Sensing, Estimation, Dynamics and Control
Examples: quadrotors, cartpole, humanoid balancing. Dynamics equations. Feedback linearization.
Slides: pdf, pptx
4 Sep 12 Intro to Planning
Representing a plan. Completeness and correctness of planners. Dijkstra's Algorithm and A-star (a small grid-planning sketch follows this entry).
Slides: pdf, pptx
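
As a quick preview of this lecture (not material from the slides), here is a minimal sketch of A-star on a 4-connected occupancy grid; Dijkstra's Algorithm is the same code with a zero heuristic. The grid encoding and helper names are assumptions made only for this example.

    # Hedged sketch: tiny A* planner on a grid of 0 = free, 1 = obstacle cells.
    import heapq

    def astar(grid, start, goal):
        """Return a shortest 4-connected path from start to goal, or None."""
        rows, cols = len(grid), len(grid[0])
        h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
        frontier = [(h(start), 0, start, [start])]               # (f, g, cell, path)
        best_g = {start: 0}
        while frontier:
            f, g, cell, path = heapq.heappop(frontier)
            if cell == goal:
                return path
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nxt = (cell[0] + dr, cell[1] + dc)
                if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols and grid[nxt[0]][nxt[1]] == 0:
                    if g + 1 < best_g.get(nxt, float("inf")):
                        best_g[nxt] = g + 1
                        heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, path + [nxt]))
        return None  # no path exists (the planner is complete on a finite grid)

    grid = [[0, 0, 0],
            [1, 1, 0],
            [0, 0, 0]]
    print(astar(grid, (0, 0), (2, 0)))  # routes around the blocked middle row
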
5 Sep 17 Sample-based Planning
RRT and PRM. Probabilistic convergence. Optimality analysis.
Slides: pdf, pptx
6 Sep 19 Introduction to Mapping
Map representations. Sensor alignment and data association.
Quiz 1
Tutorial: ROS Tutorial, 8:30-10:30am @ TR3120
Slides: pdf, pptx
7 Sep 24 Occupancy Grids
Introducing Bayesian updates. Log-odds representation. Sensor noise modeling.
Slides: pdf, pptx
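
A tiny sketch of the log-odds occupancy update named in this entry; the inverse-sensor-model probabilities are made-up values chosen only for illustration, not numbers from the lecture.

    # Hedged sketch: Bayesian log-odds update for a single occupancy-grid cell.
    import math

    def logit(p):
        return math.log(p / (1.0 - p))

    L_OCC, L_FREE, L_PRIOR = logit(0.7), logit(0.3), logit(0.5)  # assumed sensor model

    def update_cell(l, hit):
        """Add the inverse sensor model in log-odds space; subtracting removes the prior."""
        return l + (L_OCC if hit else L_FREE) - L_PRIOR

    def probability(l):
        """Recover the occupancy probability from log-odds."""
        return 1.0 - 1.0 / (1.0 + math.exp(l))

    l = L_PRIOR
    for hit in (True, True, False, True):  # three hits and one miss on this cell
        l = update_cell(l, hit)
    print(probability(l))  # repeated hits push the cell toward "occupied" (about 0.84)
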
8 Sep 26 Least Squares Estimation
Least squares and Gaussian MLE duality. Formulating robot localization with least squares. Solving large least squares problems. Sparsity and efficiency.
Tutorial: Probability Refresher, 8:30-10:30am @ TR3120 (slides)
Slides: pdf, pptx
9 Oct 1 Graph SLAM
Expectation and Covariance. Geometric interpretation of the covariance matrix. Nonlinear Least Squares formulation of the Simultaneous Localization and Mapping (SLAM) problem.
Quiz 2
Slides: pdf, pptx
10 Oct 3 Kalman Filter #1
Derivation of the Bayesian filter and outlining the goals of Kalman filtering.
Tutorial: Linear Algebra Refresher, Friday Oct 4, 3:30-5:00pm @ TR3120
Slides: pdf, pptx
11 Oct 8 Kalman Filter #2
(Continuing with the previous slides.) Derivation of the Kalman filter in 1D. Intuitions and outcomes for motion and measurements. Note that the material up to slide 50 of the Lecture 10-11 deck is the last content included on Midterm 1.
Tutorial: Kalman Filter Tutorial, Wednesday Oct 9, 4:00-6:00pm @ TR3120
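
The 1D derivation referenced above amounts to a predict/update pair; the sketch below is our own minimal version with arbitrary motion and measurement noise values, not the lecture's exact example.

    # Hedged sketch: a 1D Kalman filter written as two functions.
    def predict(mu, var, u, motion_var):
        """Motion step: shift the mean by the command u and grow the uncertainty."""
        return mu + u, var + motion_var

    def update(mu, var, z, meas_var):
        """Measurement step: blend prediction and measurement via the Kalman gain."""
        k = var / (var + meas_var)  # Kalman gain in 1D
        return mu + k * (z - mu), (1.0 - k) * var

    mu, var = 0.0, 1000.0  # start almost uninformed
    for u, z in [(1.0, 1.2), (1.0, 2.1), (1.0, 2.9)]:  # commands and measurements
        mu, var = predict(mu, var, u, motion_var=0.5)
        mu, var = update(mu, var, z, meas_var=0.25)
    print(mu, var)  # mean near 3, variance far below the initial prior
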
- Oct 10 Midterm #1 Review Session
Practice Midterm #1
MT1 Oct 15 Midterm #1
In class. Written format. Covering all previous topics, up to Lecture 11.
Midterm #1
12 Oct 17 Extended Kalman Filter
Linearization, example applications to real sensors and robots.
Slides: pdf, pptx
13 Oct 22 Particle Filtering #1
Representing multimodal distributions. Particle propagation and resampling. Pathologies of the particle filter.
Slides: pdf, pptx
14 Oct 24 Particle Filter #2
Importance Sampling. Examples: Markov localization in a known map. FastSLAM.
Quiz 3
(same slides as previous)
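
A compact, illustrative sketch of the propagate/weight/resample loop behind these two lectures, on a toy 1D localization problem; the motion noise, measurement model, and constants are placeholder assumptions, not values from the slides.

    # Hedged sketch: one particle-filter step with multinomial resampling.
    import math, random

    def measurement_likelihood(z, x, sigma=0.5):
        """Gaussian likelihood of observing z if the true position were x."""
        return math.exp(-0.5 * ((z - x) / sigma) ** 2)

    def particle_filter_step(particles, u, z):
        # 1. Propagate each particle through a noisy motion model.
        moved = [x + u + random.gauss(0.0, 0.1) for x in particles]
        # 2. Weight particles by how well they explain the measurement (importance sampling).
        weights = [measurement_likelihood(z, x) for x in moved]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # 3. Resample in proportion to the weights.
        return random.choices(moved, weights=weights, k=len(moved))

    particles = [random.uniform(0.0, 10.0) for _ in range(500)]
    for u, z in [(1.0, 3.0), (1.0, 4.1), (1.0, 5.0)]:
        particles = particle_filter_step(particles, u, z)
    print(sum(particles) / len(particles))  # particles cluster near the last measurement
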
15 Oct 29 Feedback Control Introduction
Properties of a controlled dynamical system. Controllability, stability, under-actuation. PID control.
Slides: pdf, pptx
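
For reference, a minimal PID loop in the spirit of this lecture, driving a toy integrator plant toward a setpoint; the gains, time step, and plant model are illustrative choices rather than a tuned controller.

    # Hedged sketch: textbook PID control of a single-integrator plant (x' = u).
    class PID:
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def command(self, setpoint, measurement):
            error = setpoint - measurement
            self.integral += error * self.dt
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return self.kp * error + self.ki * self.integral + self.kd * derivative

    pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.05)
    state = 0.0
    for _ in range(400):                 # simulate 20 seconds
        u = pid.command(setpoint=1.0, measurement=state)
        state += 0.05 * u                # toy plant: integrate the command
    print(round(state, 3))               # settles close to the setpoint of 1.0
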
16 Oct 31 Optimal Control
Episodic and discounted formulations. KKT conditions. Constrained optimization. Pontryagin's maximum principle. Hamilton-Jacobi-Bellman Equation.
Slides: pdf, pptx
17 Nov 5 Linear Quadratic Regulators
Computing optimal actions for linear dynamical systems with quadratic cost-to-go functions.
Slides: pdf, pptx
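
A short sketch of discrete-time LQR computed by iterating the Riccati recursion, in the spirit of this lecture; the double-integrator dynamics and cost weights are our own example system, not one taken from the slides.

    # Hedged sketch: LQR gain from the discrete Riccati recursion, then a rollout.
    import numpy as np

    def lqr_gain(A, B, Q, R, iterations=500):
        """Iterate the Riccati equation to (approximate) convergence; return the gain K."""
        P = Q.copy()
        for _ in range(iterations):
            K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
            P = Q + A.T @ P @ (A - B @ K)
        return K

    dt = 0.1
    A = np.array([[1.0, dt], [0.0, 1.0]])  # double integrator: position and velocity
    B = np.array([[0.0], [dt]])
    Q = np.diag([1.0, 0.1])                # penalize position error most
    R = np.array([[0.01]])                 # cheap control effort
    K = lqr_gain(A, B, Q, R)

    x = np.array([[1.0], [0.0]])           # start 1 m from the goal, at rest
    for _ in range(100):
        x = A @ x - B @ (K @ x)            # closed loop with u = -Kx
    print(x.ravel())                       # state driven close to the origin
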
18 Nov 7 Case Study: Control a Walking Humanoid
Building up the developed concepts through progressive examples: pendulum, cartpole, single hopper, biped.
Quiz 4
(same slides as previous)
- Nov 12 Midterm #2 Review Session
Practice Midterm #2
MT2 Nov 14 Midterm #2
Covers all previous material in the course: up to Lecture 18.
19 Nov 19 Reinforcement Learning for Robots
Research highlights (quiz material only). Model-free and model-based RL. Policy gradients.
Slides: pdf, pptx
20 Nov 21 Canceled
Dave wasn't well this day, so we canceled the lecture.
21 Nov 26 Revised EKF and Visual SLAM
Material to prepare you for the make-up EKF question, plus the planned visual SLAM content.
Slides: pdf, pptx
22 Nov 28 Wrap-up
Autonomous robot operation versus robots that collaborate in human spaces. Imitation learning, behavior cloning, inverse optimal control.
Quiz 5
Slides: pdf, pptx
- Dec 3 No lecture today
This Tuesday follows a Monday class schedule at McGill. Good luck studying for the final exam!

Assignments

Marking scheme

Recommended, but optional, textbooks

Related courses

Pre-requisites

Language policy

In accordance with McGill University’s Charter of Students’ Rights, students in this course have the right to submit any written work that is to be graded in English or in French.

Academic Integrity

Students are encouraged to discuss assignment solutions with each other; however, everyone must write up their own solution. Submissions that are found to be identical will be penalized. Plagiarism is very easy to spot, and it is not worth it. Do your own work. McGill University values academic integrity. Therefore, all students must understand the meaning and consequences of cheating, plagiarism and other academic offences under the Code of Student Conduct and Disciplinary Procedures (see www.mcgill.ca/integrity for more information).

Diversity and Inclusion

This course is meant to be equally accessible to all students who have attained the pre-requisite knowledge. My goal is to provide the material in a fair and accessible fashion. I welcome any suggestions for how to do this better, by email or anonymously via SurveyMonkey.