I worked on this project during my internship at the Waterloo Autonomous Vehicles Laboratory (WAVE Lab). I helped a Master’s student implement his research results on lane detection, porting MATLAB code to C++ for real-time testing. The lane detection pipeline uses a stereo camera to improve results and proceeds as follows:
Use the stereo camera to remove any parts of the image not on the ground plane
Convert the camera image into a bird’s eye view (BEV)
Use a neural network to look at a patch of the image and classify each pixel as lane marking or not
Extract line segments from the image
Using RANSAC, generate lane-marking hypotheses by fitting cubic splines to the line segments; a null hypothesis is also added to represent "no lane marking"
Evaluate each hypothesis using Bayesian probability over several metrics
Choose the highest-scoring hypothesis as the result
At the end of the project, we went to Detroit and demoed the system live on the road. The video below shows the results on the Waterloo dataset.