Traffic Lights Detection in adverse conditions using Convolutional Neural Networks
Session
Civil Engineering, Infrastructure and Environment
Description
In modern cars, many sensors are installed onboard, such as lidar, radar, cameras, and sonar, as parts of driver-assistance systems (DAS). Several challenging image processing and recognition problems are met in DAS; the detection of traffic lights is among the most difficult tasks. Around the world, manufacturers produce traffic lights that differ in shape, size, and layout. Even worse, traffic lights are installed in different positions and typically operate continuously, introducing a variety of adverse environmental conditions, e.g. partial occlusion, detection at night, low-visibility conditions, etc. Although traffic lights have been designed to be highly visible, the outdoor nature of the detection problem greatly increases the illumination and image background variations.
In this paper, a method for detecting the position of traffic lights in video sequences is presented and evaluated using a well-known video database. Our implementation includes a sequence of digital image processing elements and a convolutional neural network classifier (illustrative code sketches follow the list below):
- Approximately 40% of the lower part of the image is excluded from further processing, because traffic lights appear in the upper part of the video frames.
- A Gaussian low-pass filter eliminates the high-frequency noise introduced in the RGB-camera raw data, especially at night.
- A color transformation method enhances the red and green information by estimating the corresponding image in the HSV color space.
- Color segmentation detects the image areas related to background information.
- A morphological dilation operator fills the remaining regions of interest (ROIs), merging them into a set of small areas.
- The popular Canny edge detection algorithm detects the edges in the ROIs while further suppressing the noise.
- The Circle Hough Transform (CHT) estimates (position, radius) triplets in the ROIs that correspond to circles with high probability, completing the bulb detection method.
- A convolutional neural network recognizes the traffic lights in the areas where the CHT detects circular patterns.
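The following is a minimal OpenCV-based sketch of the detection steps above. All thresholds, HSV color ranges, kernel sizes, and Hough parameters are illustrative assumptions rather than the values used in our experiments.

```python
import cv2
import numpy as np

def detect_bulb_candidates(frame_bgr):
    """Return (x, y, radius) triplets of candidate traffic-light bulbs."""
    h = frame_bgr.shape[0]
    # Keep only the upper ~60% of the frame, where traffic lights appear.
    roi = frame_bgr[: int(0.6 * h), :]

    # Gaussian low-pass filter suppresses high-frequency (night) sensor noise.
    blurred = cv2.GaussianBlur(roi, (5, 5), 1.5)

    # HSV transform and color segmentation of red/green hues (assumed ranges).
    hsv = cv2.cvtColor(blurred, cv2.COLOR_BGR2HSV)
    red1 = cv2.inRange(hsv, (0, 100, 100), (10, 255, 255))
    red2 = cv2.inRange(hsv, (160, 100, 100), (179, 255, 255))
    green = cv2.inRange(hsv, (40, 100, 100), (90, 255, 255))
    mask = cv2.bitwise_or(cv2.bitwise_or(red1, red2), green)

    # Morphological dilation merges the surviving pixels into small blobs (ROIs).
    mask = cv2.dilate(mask, np.ones((5, 5), np.uint8), iterations=1)

    # Canny edges inside the masked regions, then the Circle Hough Transform
    # returns (x, y, radius) triplets that are circles with high probability.
    gray = cv2.cvtColor(blurred, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.bitwise_and(gray, gray, mask=mask), 50, 150)
    circles = cv2.HoughCircles(edges, cv2.HOUGH_GRADIENT, dp=1.2, minDist=20,
                               param1=150, param2=15, minRadius=3, maxRadius=30)
    return [] if circles is None else np.round(circles[0]).astype(int)
```

Each detected triplet defines a small window around a candidate bulb, which is then cropped and passed to the convolutional neural network classifier.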
The experiments were carried out by processing the LISA Traffic Light Dataset. The data set is freely available at Kaggle’s website and contains 44 minutes of annotated traffic light video. Both day and night sequences were used in the training and evaluation sessions. A detailed presentation of the various experiments will be given in the full paper.
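Since the abstract does not specify the network architecture, the sketch below shows only an illustrative small CNN classifier; the layer sizes, the 32x32 input crops, and the three output classes (e.g. red, green, not-a-light) are assumptions for this example.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

def build_light_classifier(num_classes=3):
    """Small illustrative CNN operating on 32x32 RGB crops around detected circles."""
    model = models.Sequential([
        layers.Input(shape=(32, 32, 3)),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(64, activation="relu"),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Such a classifier would be trained and evaluated on crops extracted from the annotated day and night LISA sequences.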
Keywords:
Traffic lights, road, HSV
Session Chair
Feti Selmani
Session Co-Chair
Anjeza Alaj
Proceedings Editor
Edmond Hajrizi
ISBN
978-9951-437-69-1
Location
Pristina, Kosovo
Start Date
27-10-2018 10:45 AM
End Date
27-10-2018 12:15 PM
DOI
10.33107/ubt-ic.2018.75
Recommended Citation
Symeonidis, George; Groumpos, Peter P.; and Dermatas, Evangelos, "Traffic Lights Detection in adverse conditions using Convolutional Neural Networks" (2018). UBT International Conference. 75.
https://knowledgecenter.ubt-uni.net/conference/2018/all-events/75