Traffic Lights Detection in adverse conditions using Convolutional Neural Networks

Session

Civil Engineering, Infrastructure and Environment

Description

In modern cars, many sensors are installed onboard, such as lidar, radar, cameras, and sonar, as parts of driver-assistance systems (DAS). Several challenging image processing and recognition problems arise in DAS; the detection of traffic lights is among the most difficult tasks. Around the world, manufacturers produce traffic lights that differ in shape, size, and layout. Even worse, traffic lights are installed in different positions and typically operate continuously, introducing a variety of adverse environmental conditions, e.g. partial occlusion, detection at night, and low-visibility conditions. Although traffic lights have been designed to be highly visible, the outdoor nature of the detection problem greatly increases the illumination and image background variations. In this paper, a method for detecting the position of traffic lights in video sequences is presented and evaluated using a well-known video database. Our implementation includes a sequence of digital image processing elements and a convolutional neural network classifier:

  1. Approximately the lower 40% of the image is excluded from further processing, because traffic lights appear in the upper part of the video frames.
  2. A Gaussian low-pass filter removes high-frequency noise, introduced in the RGB-camera raw data especially under night conditions.
  3. A color transformation enhances the red and green information by converting the image to the HSV color space.
  4. Color segmentation detects the image areas related to background information.
  5. A morphological dilation operator fills the remaining regions of interest (ROIs) into a set of small areas.
  6. The popular Canny edge detection algorithm detects the edges in the ROIs while further suppressing the noise.
  7. The Circle Hough Transform (CHT) estimates triplets (position and radius) in the ROIs that correspond to circles with high probability, completing the bulb detection method (an illustrative sketch of steps 1-7 follows this list).
  8. A convolutional neural network recognizes the traffic lights in the areas where the CHT detects circular patterns (a sketch of such a classifier is given after the dataset description below).
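
The sketch below illustrates one way steps 1-7 could be realized with OpenCV. The crop ratio, blur kernel, HSV thresholds, and Hough parameters are hypothetical placeholders chosen for illustration only, not the values used in the paper.

    # Illustrative candidate-bulb detection pipeline (steps 1-7).
    # All numeric parameters are assumptions for the sketch.
    import cv2
    import numpy as np

    def detect_bulb_candidates(frame_bgr):
        # 1. Keep only the upper ~60% of the frame, where traffic lights appear.
        h, w = frame_bgr.shape[:2]
        roi = frame_bgr[: int(0.6 * h), :]

        # 2. Gaussian low-pass filter to suppress high-frequency (night) noise.
        blurred = cv2.GaussianBlur(roi, (5, 5), 0)

        # 3. Convert to HSV to make the red and green bulb colors easier to isolate.
        hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)

        # 4. Color segmentation: keep bright, saturated red and green pixels,
        #    discarding the background areas.
        red1 = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
        red2 = cv2.inRange(hsv, (160, 100, 100), (179, 255, 255))
        green = cv2.inRange(hsv, (40, 100, 100), (90, 255, 255))
        mask = cv2.bitwise_or(cv2.bitwise_or(red1, red2), green)

        # 5. Morphological dilation to fill the remaining regions of interest.
        mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)

        # 6. Canny edge detection restricted to the segmented ROIs.
        gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
        masked_gray = cv2.bitwise_and(gray, gray, mask=mask)
        edges = cv2.Canny(masked_gray, 50, 150)

        # 7. Circle Hough Transform: (x, y, radius) triplets of likely bulbs.
        circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1.2,
                                   minDist=20, param1=100, param2=15,
                                   minRadius=3, maxRadius=30)
        return [] if circles is None else np.round(circles[0]).astype(int)

Note that OpenCV's HoughCircles applies its own gradient-based edge analysis internally, so in practice the blurred grayscale image could be passed to it directly; the explicit Canny call is kept here only to mirror the order of the steps listed above.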

The experiments were carried out on the LISA Traffic Light Dataset, which is freely available on Kaggle's website and contains 44 minutes of annotated traffic light video data. Both day and night sequences were used in the training and evaluation sessions. A detailed presentation of the various experiments will be given in the full paper.
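
As an illustration of the step-8 classifier, the sketch below shows a small CNN that labels the image patch cropped around each CHT candidate as traffic light or background. The patch size, layer configuration, and training settings are assumptions made for the example; the abstract does not specify the network architecture actually used.

    # Hypothetical step-8 classifier: a small CNN over 32x32 RGB patches
    # cropped around each (x, y, r) candidate returned by the CHT.
    import tensorflow as tf

    def build_classifier(patch_size=32, num_classes=2):
        model = tf.keras.Sequential([
            tf.keras.layers.Input(shape=(patch_size, patch_size, 3)),
            tf.keras.layers.Conv2D(16, 3, activation="relu", padding="same"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Conv2D(32, 3, activation="relu", padding="same"),
            tf.keras.layers.MaxPooling2D(),
            tf.keras.layers.Flatten(),
            tf.keras.layers.Dense(64, activation="relu"),
            tf.keras.layers.Dense(num_classes, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        return model

In such a setup, the candidate patches would be resized to the fixed input size and the network trained on labeled patches drawn from both the day and night sequences of the annotated data.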

Keywords:

Traffic lights, road, HSV

Session Chair

Feti Selmani

Session Co-Chair

Anjeza Alaj

Proceedings Editor

Edmond Hajrizi

ISBN

978-9951-437-69-1

Location

Pristina, Kosovo

Start Date

27-10-2018 10:45 AM

End Date

27-10-2018 12:15 PM

DOI

10.33107/ubt-ic.2018.75
