Autonomous Off-Road Vehicle Using Image Segmentation & Nvidia Jetson Nano

Nitin
Apr 10, 2021

In this article, I will demonstrate how to build a self-driving car that uses image segmentation and OpenCV for navigation control in an outdoor/off-road environment. You might wonder why I chose segmentation over other methods. That's because segmentation can easily map a road, highlighting the region of interest while letting us ignore everything else.

All of the processing is done on an Nvidia Jetson Nano for fast, real-time inference at a high frame rate. For a computationally heavy task like this, I would suggest using the Nano instead of an Arduino or Raspberry Pi. If for some reason you are stuck with a Raspberry Pi, you can offload some of the processing to your computer/laptop using the socket library, as sketched below.
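
Here is a minimal sketch of that offloading idea, assuming the board streams length-prefixed JPEG frames over TCP to a listener on the laptop. The address, port, and framing scheme are my own placeholder choices, not from the original build:

```python
import socket
import struct

import cv2

HOST, PORT = "192.168.1.50", 9999  # placeholder: the laptop's address

cap = cv2.VideoCapture(0)
sock = socket.create_connection((HOST, PORT))

try:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # JPEG-compress the frame to keep the payload small
        ok, buf = cv2.imencode(".jpg", frame)
        if not ok:
            continue
        data = buf.tobytes()
        # Length-prefix each frame so the receiver knows where it ends
        sock.sendall(struct.pack(">I", len(data)) + data)
finally:
    cap.release()
    sock.close()
```

The receiving side reads the 4-byte length, then that many bytes, decodes the JPEG, runs inference, and sends the steering decision back over the same connection.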

The remainder of this article is broken down into the following parts:

  • Hardware Components, Circuit Diagram and Assembly
  • Data Gathering and Labelling
  • FCN-ResNet18 Training and Validation
  • Deployment and Results

HARDWARE COMPONENTS:

  • Nvidia Jetson Nano 2GB — link
  • Car Chassis — link
  • L298N Motor Driver — link
  • Li-Ion Battery — link
  • 2 x Step Down Converters — link
  • Web camera — link
  • Mini Bread Board — link

Circuit Diagram:


Here is what the platform looks like after assembly.
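
Before trusting the full pipeline, it helps to sanity-check the wiring with a short Jetson.GPIO test of one L298N channel. The pin numbers below are placeholders; use whichever board pins IN1/IN2 are actually connected to:

```python
import time

import Jetson.GPIO as GPIO

# Placeholder pins: change to the board pins IN1/IN2 are actually wired to
IN1, IN2 = 31, 33

GPIO.setmode(GPIO.BOARD)
GPIO.setup([IN1, IN2], GPIO.OUT, initial=GPIO.LOW)

def forward():
    # IN1 high + IN2 low spins the motor one way; swap them to reverse
    GPIO.output(IN1, GPIO.HIGH)
    GPIO.output(IN2, GPIO.LOW)

def stop():
    GPIO.output(IN1, GPIO.LOW)
    GPIO.output(IN2, GPIO.LOW)

forward()
time.sleep(1.0)   # run the motor for a second
stop()
GPIO.cleanup()
```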

DATA GATHERING & LABELLING

For data collection, I drove the car for a couple of laps around the track and used OpenCV to save the video output. In total, I had about 1 hour of driving footage. Using the FFmpeg library, I broke the videos down into still frames, one for every 30 seconds of footage. I then removed all the shaky/blurred images, as well as near-duplicate frames, to reduce the dataset size. In the end, I was left with around 1000 images.
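
The article doesn't include the recording script itself, so here is a minimal sketch of what that step can look like with OpenCV; the filename, resolution, and codec are assumptions:

```python
import cv2

cap = cv2.VideoCapture(0)                      # the car's web camera
fourcc = cv2.VideoWriter_fourcc(*"XVID")
out = cv2.VideoWriter("drive_lap.avi", fourcc, 30.0, (640, 480))

while True:
    ok, frame = cap.read()
    if not ok:
        break
    frame = cv2.resize(frame, (640, 480))      # match the writer's frame size
    out.write(frame)
    cv2.imshow("recording", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):      # press q to stop recording
        break

cap.release()
out.release()
cv2.destroyAllWindows()

# Then extract one still frame per 30 seconds of footage with FFmpeg:
#   ffmpeg -i drive_lap.avi -vf fps=1/30 frames/frame_%04d.jpg
```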

For data labelling, I used the "labelme" tool, which lets users outline images with polygons and save the annotations as JSON files (these can later be converted to Pascal VOC format). There were two classes: (i) dirt_road, colour-coded (0, 255, 0), and (ii) background, colour-coded (0, 0, 0). I then had to create a mask for each image covering its region of interest (see the sketch after the directory layout below), and finally split the images and masks into separate train and validation sets:

Data
--Training
----Images
-------image_1.jpg
-------image_2.jpg
-------image_3.jpg ...
----Masks
-------mask_1.jpg
-------mask_2.jpg
-------mask_3.jpg ...
--Validation
----Images
-------image_1.jpg
-------image_2.jpg
-------image_3.jpg ...
----Masks
-------mask_1.jpg
-------mask_2.jpg
-------mask_3.jpg ...
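
Here is a hedged sketch of the mask-generation step, assuming labelme's default JSON layout (an imageHeight/imageWidth pair plus a list of labelled polygon shapes); the file names are placeholders:

```python
import json
from pathlib import Path

from PIL import Image, ImageDraw

def json_to_mask(json_path, out_path):
    """Rasterize labelme polygons into a colour-coded mask image."""
    ann = json.loads(Path(json_path).read_text())
    h, w = ann["imageHeight"], ann["imageWidth"]
    mask = Image.new("RGB", (w, h), (0, 0, 0))       # background: (0, 0, 0)
    draw = ImageDraw.Draw(mask)
    for shape in ann["shapes"]:
        if shape["label"] == "dirt_road":
            pts = [tuple(p) for p in shape["points"]]
            draw.polygon(pts, fill=(0, 255, 0))      # dirt_road: (0, 255, 0)
    mask.save(out_path)

json_to_mask("image_1.json", "mask_1.png")
```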

TRAINING & VALIDATION

For training, I used the PyTorch fcn_resnet18 model, which uses ResNet-18 as its encoder. The encoder is pre-trained on the ImageNet dataset, so the training process was quick. There are deeper backbones like ResNet-34 or ResNet-50, but since the Jetson Nano has limited processing power, I decided to use ResNet-18.
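
Note that torchvision does not ship an fcn_resnet18 out of the box (the jetson-inference project provides its own training code for it), so purely as an illustration, here is one way to wire a ResNet-18 encoder to an FCN head using torchvision's building blocks. IntermediateLayerGetter lives in a private module, so treat this as a sketch rather than a stable recipe:

```python
import torch
from torchvision.models import resnet18
from torchvision.models._utils import IntermediateLayerGetter
from torchvision.models.segmentation.fcn import FCN, FCNHead

# ImageNet-pretrained ResNet-18 encoder; keep features up to layer4
backbone = IntermediateLayerGetter(resnet18(pretrained=True),
                                   return_layers={"layer4": "out"})

# layer4 of ResNet-18 outputs 512 channels; 2 classes: dirt_road + background
model = FCN(backbone, FCNHead(512, channels=2), aux_classifier=None)

model.eval()
with torch.no_grad():
    out = model(torch.randn(1, 3, 320, 320))["out"]   # logits at input size
print(out.shape)   # torch.Size([1, 2, 320, 320])
```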

The training environment was Google Colab; the model was trained for 30 epochs on the two classes and reached an accuracy of 97.32% and a mean IoU of about 90%. The model then needs to be converted into ONNX format for the jetson-inference library.
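
The conversion is a single torch.onnx.export call. Continuing with the model sketched above; the input_0/output_0 tensor names follow the convention used by jetson-inference's export scripts, but treat them as an assumption and match whatever names your segNet invocation expects:

```python
import torch

class ExportWrapper(torch.nn.Module):
    """Unwrap the dict output so the ONNX graph has a single tensor output."""
    def __init__(self, net):
        super().__init__()
        self.net = net

    def forward(self, x):
        return self.net(x)["out"]

wrapped = ExportWrapper(model).eval()
dummy = torch.randn(1, 3, 320, 320)   # must match the inference resolution

torch.onnx.export(wrapped, dummy, "fcn_resnet18.onnx",
                  input_names=["input_0"], output_names=["output_0"],
                  opset_version=11)
```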

DEPLOYMENT:

Here is a video demonstration of the robot car; it shows the vehicle movement, the segmentation mask, and the navigation system. There were certain instances where the vehicle got stuck due to uneven terrain and limited acceleration. Apart from that, the vehicle platform performed as I had expected it to.
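
The article doesn't list the navigation code itself, but one simple strategy consistent with the demo is to steer toward the horizontal centroid of the predicted road pixels. A hedged sketch, assuming a binary dirt_road mask is already available from the segmentation output (the function name and tolerance are illustrative):

```python
import numpy as np

def steering_command(mask, tolerance=0.15):
    """Decide a steering command from a binary road mask (1 = dirt_road).

    Compares the horizontal centroid of the road pixels against the
    image centre; tolerance is a fraction of the image width.
    """
    ys, xs = np.nonzero(mask)
    if xs.size == 0:
        return "stop"                # no road visible
    centre = mask.shape[1] / 2.0
    offset = (xs.mean() - centre) / mask.shape[1]
    if offset < -tolerance:
        return "left"
    if offset > tolerance:
        return "right"
    return "forward"

# Example: road hugging the right side of a small mask -> "right"
demo = np.zeros((4, 8), dtype=np.uint8)
demo[:, 5:] = 1
print(steering_command(demo))
```

The dead zone around the centre keeps the car from oscillating between left and right commands on a straight section of track.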

Building the robot and training the segmentation model has been a really fun experience. I learned a lot from this project and hope to build more complex navigation and road-following platforms in the future. If you liked this article, please follow me on Medium, GitHub and LinkedIn; I post regular content about my projects on all of these platforms.

Sources:

LinkedIn: https://www.linkedin.com/in/nitin-kumar-a3121018a/

GitHub: https://github.com/nogifeet
