Recreating Nvidia’s PilotNet in PerceptiLabs

Computer Vision is a key technology for building algorithms to enable self-driving cars. One of the pioneering projects in this field was an experimental system called PilotNet by Nvidia. It uses a deep neural network (DNN) that takes image frames from cameras mounted on the front of a car and determines the trajectory, i.e., the steering angle to apply to the steering wheel.

PilotNet's architecture is composed of the layers shown in Figure 1:

Figure 1: Conceptual Image Illustrating the Layers of the PilotNet Model – Credits: NVIDIA.

In a nutshell, input images from the cameras are transformed using a series of convolution layers to extract features. Fully connected layers then output the single angle by which the model believes the car's steering wheel should be turned in order to navigate successfully. Of course, if you were to build this model with conventional tools (e.g., directly in TensorFlow), it would be difficult to visualize the architecture. This is where the PerceptiLabs visual modeling tool really shines, as it allows you to see the model as you build it.
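
For comparison, here's a minimal TensorFlow/Keras sketch of this layer stack, with layer sizes taken from Nvidia's "End to End Learning for Self-Driving Cars" paper (the model you build in PerceptiLabs may differ slightly):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_pilotnet():
    """Sketch of PilotNet per Nvidia's paper: five convolution layers for
    feature extraction, then fully connected layers regressing a single
    steering angle."""
    return models.Sequential([
        layers.Input(shape=(66, 200, 3)),            # YUV frames, per the paper
        layers.Rescaling(1.0 / 127.5, offset=-1.0),  # normalize pixels to [-1, 1]
        layers.Conv2D(24, (5, 5), strides=(2, 2), activation="relu"),
        layers.Conv2D(36, (5, 5), strides=(2, 2), activation="relu"),
        layers.Conv2D(48, (5, 5), strides=(2, 2), activation="relu"),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.Conv2D(64, (3, 3), activation="relu"),
        layers.Flatten(),
        layers.Dense(100, activation="relu"),
        layers.Dense(50, activation="relu"),
        layers.Dense(10, activation="relu"),
        layers.Dense(1),                             # predicted steering angle
    ])

model = build_pilotnet()
model.compile(optimizer="adam", loss="mse")
model.summary()
```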

With research into self-driving cars accelerating (pun intended), we thought: why not recreate the PilotNet model in PerceptiLabs to show just how easy it is to build? Then, to prove the point, we decided to do it in front of a live audience! Here’s what happened:

To train this model we used sample data from Udacity's car simulator as the input, which we pre-processed (normalized) using Google Colab. The data consists of frames captured by three cameras mounted on the front of the vehicle, which are used together to train the model to navigate (a pre-processing sketch follows the figure below):

Figure 2: Example Frames Representing the Left, Center, and Right Cameras on a Car.
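
For readers who want to reproduce the pre-processing step, below is a rough Python sketch of how such normalization might look. The driving_log.csv file name, its column layout, and the ±0.2 steering correction for the side cameras are assumptions based on common Udacity simulator setups, not the exact script we used:

```python
import csv
import numpy as np
from PIL import Image

# Hypothetical steering correction for the left/right cameras; a common
# convention, not necessarily the value we used.
CORRECTION = 0.2

def load_samples(log_path="driving_log.csv"):
    """Load camera frames and steering angles from a Udacity simulator log,
    normalizing pixel values to [-1, 1]."""
    images, angles = [], []
    with open(log_path) as f:
        # Assumed column order: center, left, right, steering, ...
        for center, left, right, steering, *_ in csv.reader(f):
            angle = float(steering)
            for path, offset in ((center, 0.0), (left, CORRECTION), (right, -CORRECTION)):
                frame = np.asarray(Image.open(path.strip()), dtype=np.float32)
                images.append(frame / 127.5 - 1.0)  # normalize to [-1, 1]
                angles.append(angle + offset)
    return np.array(images), np.array(angles)

X, y = load_samples()
print(X.shape, y.shape)
```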

The resulting PilotNet model looks as follows in PerceptiLabs:

Figure 3: Screenshot of the PilotNet Model in PerceptiLabs.

As our model and the video above show, you can quickly and easily create models suitable for self-driving automotive applications in the PerceptiLabs visual modeling tool. Furthermore, you can visually inspect the feature maps produced by the different convolution layers in the model, and watch as the model develops navigational intelligence during training.
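
If you want to do something similar outside PerceptiLabs, here is a hedged Keras sketch of how intermediate feature maps can be pulled out of a trained model; it assumes the `model` from the earlier sketch and a pre-processed input `frame` of shape (66, 200, 3):

```python
import tensorflow as tf

# Build a probe model whose outputs are the feature maps of every
# convolution layer in `model` (from the sketch above).
conv_outputs = [layer.output for layer in model.layers
                if isinstance(layer, tf.keras.layers.Conv2D)]
probe = tf.keras.Model(inputs=model.input, outputs=conv_outputs)

# `frame` is assumed to be a single pre-processed image of shape (66, 200, 3).
feature_maps = probe(frame[None, ...])  # add a batch dimension
for i, fmap in enumerate(feature_maps):
    print(f"conv layer {i}: feature maps of shape {fmap.shape}")
```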

To learn more about this model and to play around with it in PerceptiLabs, check out our Nvidia-PilotNet repo on GitHub.