CN112149509A - Traffic signal lamp fault detection method integrating deep learning and image processing - Google Patents


Publication number
CN112149509A
Authority
CN
China
Prior art keywords
signal lamp
lamp
light
img
information
Prior art date
Legal status
Granted
Application number
CN202010865255.XA
Other languages
Chinese (zh)
Other versions
CN112149509B (en)
Inventor
徐震辉
祁照阁
蒋栋奇
曹锋
袁旖
马建国
徐茂军
Current Assignee
Zhejiang Supcon Information Technology Co ltd
Original Assignee
Zhejiang Supcon Information Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Zhejiang Supcon Information Technology Co ltd
Priority claimed from CN202010865255.XA
Publication of CN112149509A
Application granted
Publication of CN112149509B
Legal status: Active

Classifications

    • G06F18/2415: Pattern recognition; classification techniques based on parametric or probabilistic models, e.g. likelihood ratio or false acceptance rate versus false rejection rate
    • G06N3/045: Neural networks; combinations of networks
    • G06N3/08: Neural networks; learning methods
    • G06V10/56: Image or video recognition; extraction of features relating to colour
    • G08G1/097: Traffic control systems for road vehicles; supervising, e.g. giving an alarm if two crossing streets have green light simultaneously
    • G06F2218/08: Pattern recognition adapted for signal processing; feature extraction
    • G06F2218/12: Pattern recognition adapted for signal processing; classification; matching
    • Y02B20/40: Energy-efficient lighting; control techniques providing energy savings, e.g. smart controller or presence detection

Abstract

The invention discloses a traffic signal lamp fault detection method that integrates deep learning and image processing. The method comprises: step S1, obtaining electronic-police video streams under various weather conditions and decoding them to obtain images containing traffic signal lamps, the position of each traffic signal lamp group in the images, and the type and position of each signal lamp within a group; step S2, configuring the lamp-group information and the per-lamp information within each group; step S3, building a traffic signal lamp detection model. The invention effectively handles small-range camera shake and position offset and improves alarm accuracy: a deep learning technique based on a convolutional neural network identifies the position and state of each signal lamp, and an image processing algorithm based on the signal lamp period further improves recognition accuracy.

Description

Traffic signal lamp fault detection method integrating deep learning and image processing
Technical Field
The invention relates to the technical field of traffic signal lamp state monitoring, and in particular to a traffic signal lamp fault detection method that integrates deep learning and image processing.
Background
Traffic signal lamps are a category of traffic-safety equipment used to strengthen road traffic management and improve road utilisation. Their normal operation underpins the normal operation of a city, yet a large number of existing traffic signal lamps are non-intelligent devices without fault self-diagnosis capability; fault detection currently relies on reports from on-duty traffic police, routine inspection of traffic facilities, citizen complaints, and similar channels.
These fault-detection practices suffer from heavy maintenance workload, late fault discovery and low efficiency. To meet the practical needs of detecting and repairing urban road traffic signal lamp faults, the current mainstream approach to automatic fault detection is detection based on video recognition.
Video-recognition-based detection mainly relies on deep learning: a large set of signal lamp images is collected and used to train a signal lamp detection model; the model then detects the signal lamps within the signal lamp region of the video, and finally their positions and states are determined. Current deployments fall into intersection-level and centre-level detection schemes. In practical use these methods face several problems: the number of signal lamp samples is limited, the detection model generalises poorly, and detections are missed, which degrades detection precision and causes false alarms.
For example, "a method, a system, and a storage medium for detecting a failure of a traffic signal lamp" disclosed in chinese patent literature, the publication number: CN109636777A, filing date thereof: 11/20/2018, comprising the following steps: acquiring a video stream of a camera, and acquiring a single-frame image in real time; acquiring a traffic signal lamp area according to the acquired single-frame image; judging the detection environment of the traffic signal lamp according to the traffic signal lamp area; according to the judgment result, carrying out gray-scale image division on the traffic signal lamp area to obtain a single-channel gray-scale image; the single-channel gray-scale image comprises a red-channel gray-scale image, a yellow-channel gray-scale image and a green-channel gray-scale image; obtaining a red gray image, a yellow gray image and a green gray image according to the single-channel gray image; respectively carrying out image binarization processing on the red gray level image, the yellow gray level image and the green gray level image to obtain a binarized image; generating a fault detection result according to the binary image; the faults include a traffic light extinction fault and a traffic light display fault. The traffic signal lamp fault detection method mainly adopts an image processing method, the number of signal lamp samples obtained by the method is limited, the detection precision is low, and false alarm is easily caused.
Disclosure of Invention
The invention mainly addresses the weak generalisation of the detection model, low recognition precision and missed detections of existing video-based methods; the proposed traffic signal lamp fault detection method, which integrates deep learning and image processing, reduces the false alarm rate of video-based signal lamp fault detection and improves alarm accuracy.
The technical problem of the invention is mainly solved by the following technical scheme. The traffic signal lamp fault detection method integrating deep learning and image processing comprises the following steps:
step S1: obtain electronic-police video streams under various weather conditions and decode them to obtain images containing traffic signal lamps, the position of each traffic signal lamp group in the images, and the type and position of each signal lamp within a group;
step S2: configuring traffic signal lamp group information and signal lamp information in a lamp group;
step S3: establishing a detection model of a traffic signal lamp;
step S4: carrying out image processing and identification on the traffic signal lamp;
step S5: counting the state of a traffic signal lamp group according to a signal lamp period;
step S6: detect the configured detection area in the traffic signal lamp image with the signal lamp detection model; if the model fails to detect a signal lamp, locate the position and state of the signal lamp within the lamp-group area by image processing; match the detected position and category of each signal lamp against the configured per-lamp information of each lamp group; and judge a traffic signal lamp fault for any lamp that fails to match.
Matching against the signal lamp configuration information means that the detected centre coordinates of a signal lamp lie within the corresponding configured circumscribed rectangle and the detected category equals the category of that rectangle. The invention can acquire the position and state of each traffic signal lamp in real time, match the per-period statistics of detected positions against the configured lamp-group information, correct the lamp type accordingly, and judge from the configuration information whether a traffic signal lamp is faulty.
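The matching rule just described can be sketched as a small predicate (a hypothetical illustration; the function and field names are not from the patent):

```python
def matches_config(det_cx, det_cy, det_cat, cfg):
    """True if a detected lamp matches a configured lamp.

    cfg: dict with the configured circumscribed rectangle's centre (x, y),
    size (w, h) and lamp category.
    """
    inside_x = cfg["x"] - cfg["w"] / 2 < det_cx < cfg["x"] + cfg["w"] / 2
    inside_y = cfg["y"] - cfg["h"] / 2 < det_cy < cfg["y"] + cfg["h"] / 2
    # Match requires the centre inside the rectangle AND the same category.
    return inside_x and inside_y and det_cat == cfg["category"]
```

A lamp failing this predicate for every configured rectangle of its group is the case the method reports as a fault candidate.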
A signal lamp detection model is trained from a large number of collected signal lamp samples; the model detects the position of each signal lamp of each lamp group in the electronic-police video and identifies its state (red, yellow, green or countdown). When the model cannot locate a signal lamp in the image, the method switches to image processing to locate the current lamp, and the lamp type is then obtained from the existing signal lamp configuration information. Identifying the lamp state by both methods improves the accuracy of signal lamp recognition in the electronic-police video stream. The maximum period of the intersection signal controller is taken as the statistical period, and the positions and types of the individual lamps in each group are accumulated over one period. Taking the green signal as the reference (green lamps are detected most reliably), the position and type of each individual lamp in the statistics are matched against the relative-position configuration of the group, completing the correction of the group's type information. Finally, from the group state detected in real time, faults such as the whole group off, lamps on simultaneously, and countdown off are judged in real time.
From the red, yellow, green and countdown lamp information detected within a period, the method judges whether each individual lamp in the traffic signal lamp group is currently faulty. Detectable fault types include: the whole group off, red lamp off, yellow lamp off, green lamp off, countdown off, red and yellow on simultaneously, red and green on simultaneously, and yellow and green on simultaneously.
Preferably, the step S2 includes the following steps:
step S21: configure the signal lamp group information: set the detection area and lamp-group area of each signal lamp group; if a detected signal lamp position falls within a configured lamp-group area, the lamp's type and position are counted for that group;
step S22: configure the signal lamp information: set a circumscribed rectangle for each signal lamp, where each rectangle carries the position and type information of its lamp.
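The configuration of steps S21 and S22 can be sketched as simple in-memory structures (a minimal illustration; all class and field names are assumptions, not from the patent):

```python
from dataclasses import dataclass, field

@dataclass
class LampConfig:
    category: str   # "red" | "yellow" | "green" | "countdown"
    x: float        # centre of the circumscribed rectangle in the image
    y: float
    w: float        # rectangle width
    h: float        # rectangle height

@dataclass
class LampGroupConfig:
    group_id: int
    detection_area: tuple           # (x1, y1, x2, y2) lamp-group region
    lamps: list = field(default_factory=list)

    def contains(self, cx, cy):
        """Is a detected lamp centre inside this group's area (step S21)?"""
        x1, y1, x2, y2 = self.detection_area
        return x1 <= cx <= x2 and y1 <= cy <= y2
```

A detected lamp whose centre satisfies `contains` is counted towards that group's per-period statistics.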
Preferably, the step S3 includes the following steps:
step S31: select the YOLOv3-tiny deep convolutional network as the traffic signal lamp target detection network;
step S32: make traffic signal lamp training samples and train the detection model: acquire electronic-police video streams, extract traffic signal lamp images in batches, ensure that each individual lamp occupies at least 8 x 8 pixels in the image, and annotate the position and type of every lamp to generate training samples; feed the samples into the target detection network and perform classification-regression training to generate the signal lamp detection model;
step S33: the type of the individual traffic signal light is identified and the individual traffic signal light is located.
Preferably, the step S4 includes the following steps:
step S41: using the periodicity of the signal lamp, compute the maximum pixel image max_img and the minimum pixel image min_img of the signal lamp region over one period;
step S42: subtract the previous period's minimum pixel image min_img from the current period's signal-lamp-region image img to obtain the difference image diff_img;
step S43: set a suitable threshold thresh and binarise the difference image with it to obtain the signal lamp position;
step S44: match the detected signal lamp position against the configured signal lamp positions to obtain the lamp category.
Preferably, the step S6 includes the following steps:
step S61: if the lamp group's configured category information includes a green lamp, perform the following matching and fault judgment:
step a: take one signal lamp period as the complete statistics window and accumulate the green lamp position information;
step b: if the statistics contain no green lamp position, raise a green-lamp abnormality alarm into the alarm queue;
step c: when the statistics contain green lamp position information, use the configured circumscribed-rectangle positions of the green lamp (xg, yg, w, h) and the red lamp (xr, yr, w, h) to compute the distances dx1, dx2 or dy1, dy2 between the red and green rectangles in the configuration:
for horizontal lamp groups, dx1 = |xg - xr|, dx2 = |yg - yr|;
for vertical lamp groups, dy1 = |yg - yr|, dy2 = |xg - xr|;
given the detected green lamp position (x'g, y'g, w, h) in the statistics, compute the red lamp position (x'r, y'r, w, h) using dx1, dx2 or dy1, dy2:
for horizontal lamp groups, x'r = x'g - dx1, y'r = y'g - dx2;
for vertical lamp groups, y'r = y'g - dy1, x'r = x'g - dy2;
if the red lamp position (x1, y1, w, h) in the statistics does not match the computed red lamp position (x'r, y'r, w, h), raise a red-lamp abnormality alarm into the alarm queue;
the matching conditions are as follows:
horizontal match: (x'r - w/2) < x1 < (x'r + w/2) and |y'r - y1| < h/2;
vertical match: (y'r - h/2) < y1 < (y'r + h/2) and |x'r - x1| < w/2;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group the vertical match;
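Step c can be sketched as follows, reproducing the offset and matching formulas as written (function and variable names are illustrative; the subtraction convention follows the patent's formulas, which assume the configured relative placement of the red and green lamps):

```python
def predict_red(cfg_green, cfg_red, det_green, horizontal=True):
    """Predict where the red lamp should appear, given the detected green
    lamp centre det_green and the configured centres of green and red."""
    xg, yg = cfg_green
    xr, yr = cfg_red
    xg2, yg2 = det_green
    if horizontal:
        d1, d2 = abs(xg - xr), abs(yg - yr)   # dx1, dx2
        return xg2 - d1, yg2 - d2             # x'r = x'g - dx1, y'r = y'g - dx2
    d1, d2 = abs(yg - yr), abs(xg - xr)       # dy1, dy2
    return xg2 - d2, yg2 - d1                 # x'r = x'g - dy2, y'r = y'g - dy1

def red_matches(pred, det, w, h, horizontal=True):
    """Check the detected red centre against the predicted one."""
    px, py = pred
    x1, y1 = det
    if horizontal:
        return px - w / 2 < x1 < px + w / 2 and abs(py - y1) < h / 2
    return py - h / 2 < y1 < py + h / 2 and abs(px - x1) < w / 2
```

Steps d and e follow the same pattern with the yellow and countdown lamps substituted for the red lamp.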
step d: when the statistics contain green lamp position information, use the configured circumscribed-rectangle positions of the green lamp (xg, yg, w, h) and the yellow lamp (xy, yy, w, h) to compute the distances dx3, dx4 or dy3, dy4 between the green and yellow rectangles in the configuration:
for horizontal lamp groups, dx3 = |xg - xy|, dx4 = |yg - yy|;
for vertical lamp groups, dy3 = |yg - yy|, dy4 = |xg - xy|;
once the detected green lamp position (x'g, y'g, w, h) is available, compute the yellow lamp position (x'y, y'y, w, h) using dx3, dx4 or dy3, dy4:
for horizontal lamp groups, x'y = x'g - dx3, y'y = y'g - dx4;
for vertical lamp groups, y'y = y'g - dy3, x'y = x'g - dy4;
if the yellow lamp position (x2, y2, w, h) in the statistics does not match the computed yellow lamp position (x'y, y'y, w, h), raise a yellow-lamp abnormality alarm into the alarm queue;
the matching conditions are as follows:
horizontal match: (x'y - w/2) < x2 < (x'y + w/2) and |y'y - y2| < h/2;
vertical match: (y'y - h/2) < y2 < (y'y + h/2) and |x'y - x2| < w/2;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group the vertical match;
step e: when the statistics contain green lamp position information, use the configured circumscribed-rectangle positions of the green lamp (xg, yg, w, h) and the countdown lamp (xc, yc, w, h) to compute the distances dx5, dx6 or dy5, dy6 between the countdown and green rectangles in the configuration:
for horizontal lamp groups, dx5 = |xg - xc|, dx6 = |yg - yc|;
for vertical lamp groups, dy5 = |yg - yc|, dy6 = |xg - xc|;
once the detected green lamp position (x'g, y'g, w, h) is available, compute the countdown lamp position (x'c, y'c, w, h) using dx5, dx6 or dy5, dy6:
for horizontal lamp groups, x'c = x'g - dx5, y'c = y'g - dx6;
for vertical lamp groups, y'c = y'g - dy5, x'c = x'g - dy6;
if the countdown lamp position (x3, y3, w, h) in the statistics does not match the computed countdown lamp position (x'c, y'c, w, h), raise a countdown-lamp abnormality alarm into the alarm queue;
the matching conditions are as follows:
horizontal match: (x'c - w/2) < x3 < (x'c + w/2) and |y'c - y3| < h/2;
vertical match: (y'c - h/2) < y3 < (y'c + h/2) and |x'c - x3| < w/2;
if the green, red, yellow and countdown lamps all match, the alarm queue is cleared, matching is complete, and the traffic signal lamp is normal;
step S62: if the lamp group's configured category information contains no green lamp, matching and fault judgment proceed as follows: if the per-period statistics of detected signal lamp positions and states can be fully matched against the configured lamp-group information, the group is judged normal; otherwise the group is judged abnormal.
Preferably, in step S33 the type of an individual traffic signal lamp is identified as follows: the real-time electronic-police video stream is converted into RGB image data and fed into the signal lamp detection model, which outputs the circumscribed rectangle of each traffic signal lamp in the current RGB image together with the identified lamp type.
Preferably, the maximum pixel image max_img and minimum pixel image min_img in step S41 are obtained as follows: starting from the current signal period T1, the lamp-group region image of the first RGB frame serves as the reference, and the lamp-group region image img(t) of every subsequent frame is compared against it, where t ranges over [0, T] and each value of t indexes the lamp-group region image at that moment;
let min_img be the minimum pixel image of the lamp-group region
and max_img its maximum pixel image, initialised as:
min_img = img(0)
max_img = img(0)
Within a signal period T, for every pixel coordinate [i, j]:
if min_img[i, j] > img(t)[i, j], then min_img[i, j] = img(t)[i, j];
if max_img[i, j] < img(t)[i, j], then max_img[i, j] = img(t)[i, j];
when t = T, the minimum pixel image min_img and maximum pixel image max_img of period T1 have been obtained.
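The per-pixel extrema above map directly onto element-wise minima and maxima; a minimal NumPy sketch (the frame source, function name and uint8 dtype are assumptions):

```python
import numpy as np

def period_extrema(frames):
    """frames: iterable of equally sized lamp-group region images (uint8).

    Returns (min_img, max_img) accumulated over one signal period:
    min_img keeps the darkest value per pixel (background, lamps off),
    max_img the brightest (lamps lit at some point in the period).
    """
    it = iter(frames)
    first = next(it)
    min_img = first.copy()
    max_img = first.copy()
    for img in it:
        np.minimum(min_img, img, out=min_img)  # per-pixel minimum update
        np.maximum(max_img, img, out=max_img)  # per-pixel maximum update
    return min_img, max_img
```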
Preferably, the difference image diff_img in step S42 is obtained as follows: with img(t) the lamp-group region image of each RGB frame in the current period T2, subtract the minimum pixel image min_img of the lamp-group region obtained in the previous period T1: diff_img(t) = img(t) - min_img.
Preferably, the signal lamp position in step S43 is obtained as follows: from diff_img(t) of step S42, set the corresponding threshold thresh;
if diff_img(t)[i, j] >= thresh, then diff_img(t)[i, j] = 255;
otherwise (diff_img(t)[i, j] < thresh), diff_img(t)[i, j] = 0;
this yields a binary image, on whose white pixel blocks contour detection and minimum-circumscribed-rectangle extraction are performed; let the obtained minimum circumscribed rectangle have centre (xi, yi), length hi and width wi;
in the information of the signal lamp of the configuration device,
the position information of the circumscribed rectangle of the red light is (xr, yr, w, h);
the position information of the circumscribed rectangle of the green light is (xg, yg, w, h);
the position information of the circumscribed rectangle of the yellow light is (xy, yy, w, h);
the position information of the circumscribed rectangle of the countdown lamp is (xc, yc, w, h);
if xr-w/2< xi < xr + w/2, yr-h/2< yi < yr + h/2,
the signal light identification result in the figure is a red light;
if xg-w/2< xi < xg + w/2, yg-h/2< yi < yg + h/2,
the signal lamp identification result in the figure is a green lamp;
if xy-w/2< xi < xy + w/2, yy-h/2< yi < yy + h/2,
the signal lamp identification result in the figure is a yellow lamp;
if xc-w/2< xi < xc + w/2, yc-h/2< yi < yc + h/2, the signal lamp identification result in the figure is a countdown lamp;
the threshold thresh is calculated as follows:
thresh=mean+α*dev,
wherein mean is the mean value of diff _ img (t), dev is the standard deviation of diff _ img (t), alpha is a coefficient, and the value range of alpha is between [0,4 ].
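Steps S42 and S43 together with the threshold formula above can be sketched as follows (a hypothetical minimal implementation; the dtype handling and the default α value are assumptions):

```python
import numpy as np

def binarize_diff(img, min_img, alpha=2.0):
    """img: lamp-group region image of the current period;
    min_img: minimum pixel image from the previous period.
    Returns the binarised difference image (255 = candidate lit lamp)."""
    # Signed subtraction to avoid uint8 wraparound, then clamp to [0, 255].
    diff = img.astype(np.int16) - min_img.astype(np.int16)
    diff = np.clip(diff, 0, 255).astype(np.uint8)
    # Adaptive threshold: thresh = mean + alpha * dev.
    thresh = diff.mean() + alpha * diff.std()
    return np.where(diff >= thresh, 255, 0).astype(np.uint8)
```

The white blocks of the returned image would then go through contour detection and minimum-circumscribed-rectangle extraction (e.g. with OpenCV) to yield the centres (xi, yi) used in the category matching above.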
Preferably, the state of a traffic signal lamp group within one signal lamp period is obtained in step S5 as follows: the group comprises a red lamp, a yellow lamp, a green lamp and a countdown lamp, where red means no passage, green means passage permitted and yellow means warning;
taking the complete signal lamp period of the current intersection as the reference window, count the red, green, yellow and countdown lamps that appear within the period;
if no red lamp is counted within the period, the red lamp has an off fault;
if no green lamp is counted within the period, the green lamp has an off fault;
if no yellow lamp is counted within the period, the yellow lamp has an off fault;
if no countdown lamp is counted within the period, the countdown lamp has an off fault; if none of the red, green, yellow and countdown lamps is counted within the period, the group has an all-off fault;
if the counted time t during which the red and yellow lamps are lit simultaneously within the period exceeds the set time threshold T, a red-and-yellow simultaneous-on fault is judged;
if the counted time t during which the red and green lamps are lit simultaneously exceeds T, a red-and-green simultaneous-on fault is judged;
if the counted time t during which the yellow and green lamps are lit simultaneously exceeds T, a yellow-and-green simultaneous-on fault is judged;
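The period-level fault rules above can be sketched as a pure function over per-period statistics (a hypothetical illustration; the data layout and fault labels are assumptions, not from the patent):

```python
def judge_faults(seen, overlap_secs, t_max):
    """seen: set of lamp categories observed during one signal period.
    overlap_secs: dict mapping a pair like ("red", "green") to the time the
    two lamps were lit simultaneously. t_max: the duration threshold T."""
    faults = []
    if not seen:
        faults.append("all lamps off")          # whole group off
    for cat in ("red", "yellow", "green", "countdown"):
        if cat not in seen:
            faults.append(f"{cat} off")         # individual off fault
    for pair, secs in overlap_secs.items():
        if secs > t_max:
            faults.append(f"{pair[0]} and {pair[1]} on simultaneously")
    return faults
```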
the traffic signal lamp statistics principle is as follows: recording a first signal lamp entering a statistical queue;
and if d is > (h + w)/4, newly adding the position information of the signal lamp A into the queue.
The invention has the following beneficial effects: (1) it effectively handles small-range camera shake and position offset, reducing the false alarm rate of video-based signal lamp fault detection and improving alarm accuracy; (2) a deep learning technique based on a convolutional neural network identifies the position and state of each signal lamp, and an image processing algorithm based on the signal lamp period further improves recognition accuracy; (3) signal lamp position matching and type calibration are performed from the signal lamp positions detected within one period of the intersection signal controller, so that traffic signal lamp faults can be detected and diagnosed, enabling traffic-facility maintenance personnel to find and repair faults promptly.
Drawings
Fig. 1 is a flow chart of a traffic signal lamp fault detection method with deep learning and image processing integrated according to a first embodiment.
FIG. 2 is a configuration information diagram of a traffic signal lamp group according to the first embodiment.
Fig. 3 is a configuration information diagram of a traffic signal lamp according to the first embodiment.
Fig. 4 is a schematic diagram of image processing of the traffic signal lamp according to the first embodiment.
Fig. 5 is a schematic diagram of statistical information of the traffic signal lamp set according to the first embodiment.
Fig. 6 is a flow chart of alarm of traffic signal lamp set matching according to the first embodiment.
Detailed Description
The technical scheme of the invention is further specifically described by the following embodiments and the accompanying drawings.
The first embodiment is as follows: a traffic signal lamp fault detection method integrating deep learning and image processing is disclosed, as shown in FIG. 1, and comprises the following steps:
(1-1) obtain electronic-police video streams under sunny, cloudy and rainy conditions and decode them to obtain images containing traffic signal lamps, the position of each traffic signal lamp group in the images, and the type and position of each signal lamp within a group;
(1-2) configuring traffic signal lamp group information and signal lamp information in the lamp group:
(1-2-1) configuration signal light group information:
setting a detection area and a lamp group area of a signal lamp group, and counting the type and position information of the signal lamp in the set lamp group area if the detected position of the signal lamp is in the set lamp group area;
as shown in fig. 2, the area set when the marker of the present invention arranges the traffic signal light group is relatively large in consideration of the jitter and offset of the live video camera. The detection area of the left-turn arrow lamp is shown as rectangular box 1. In the video image recognition, the recognized signal lamp center is in the rectangular frame 1, and the type and coordinate information of the signal lamp are counted in the left turn signal lamp group 1. In the same way, the detection area of the straight disc lamp is shown as a rectangular frame 2. In the video image recognition, the recognized signal lamp center is in the rectangular frame 2, and the type and coordinate information of the signal lamp are counted in the straight signal lamp group 2. It can be seen that there is an overlapping region between the signal light group wire frames, which may cause the signal light information to have statistical information in both the light group wire frame 1 and the light group wire frame 2, and the present invention corrects the redundant information by matching.
(1-2-2) Configuring signal lamp information
A circumscribed rectangular frame is set for each signal lamp; each frame contains the position information and type information of the lamp, where the position information includes two parts: (1) the center coordinates (x, y) of the signal lamp in the traffic signal image; (2) the length h and width w of the circumscribed rectangle.
As shown in FIG. 3, within rectangular frame 1 of the signal lamp group, the invention adds in sequence the type and position information of a red light, a yellow light, a green light, and a countdown lamp. FIG. 3 shows the general configuration information contained in a lamp group: the type, number, and coordinate range (rectangular frame 1) of the lamp group, and the type and coordinate information of each lamp in the group.
(1-3) identification of traffic Signal lights
(1-3-1) selecting a YOLOV3-tiny deep convolution network as a traffic light target detection network;
By counting the pixel-level sizes of signal lamps in a variety of intersection scenes, the detection range of signal lamp size is determined to be 8–40 pixels, and 6 anchors are determined by clustering. The darknet19 backbone adopted by YOLOV3-tiny applies 5 layers of max pooling, so only simple low-level features are extracted from the image for recognition, and the positioning and recognition accuracy for small targets such as signal lamps is poor. After training and testing on a large number of samples, the scanning stride of the 1st-layer Maxpool is changed from 2 to 1, increasing the receptive field of the detection layer; meanwhile, the number of shallow convolution kernels is increased, improving the positioning accuracy and recognition accuracy of signal lamp detection.
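The anchor-clustering step can be illustrated with a minimal k-means over labeled box dimensions. This is a generic sketch of how YOLO-style anchors are commonly derived, not the patent's exact procedure; the deterministic initialization and plain Euclidean distance are simplifying assumptions (YOLO implementations often use an IoU-based distance).

```python
# Naive k-means over (w, h) pairs of labeled signal-lamp boxes to derive
# k anchor sizes. Illustrative only: real anchor clustering typically uses
# 1 - IoU as the distance and random restarts.
def kmeans_anchors(boxes, k=6, iters=50):
    # deterministic init: the first k labeled boxes seed the clusters
    centers = list(boxes[:k])
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            # assign each box to the nearest center (squared Euclidean distance)
            i = min(range(k),
                    key=lambda j: (w - centers[j][0]) ** 2 + (h - centers[j][1]) ** 2)
            clusters[i].append((w, h))
        for i, c in enumerate(clusters):
            if c:  # recompute the center as the cluster mean
                centers[i] = (sum(w for w, _ in c) / len(c),
                              sum(h for _, h in c) / len(c))
    return sorted(centers)
```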
(1-3-2) making a training sample of the traffic signal lamp, sending the training sample into a traffic signal lamp target detection network, performing regression training on a traffic signal detection model, and generating a signal lamp detection model;
An electronic police video stream is acquired and traffic signal lamp images are captured from it in batches, ensuring that each individual traffic signal lamp in an image is at least 8 × 8 pixels. The position and type of each traffic signal lamp in the images are annotated to generate training samples, which are fed into the traffic signal lamp target detection network for regression training of the traffic signal detection model, producing the signal lamp detection model;
(1-3-3) identifying the type of the individual traffic signal lamp and locating the individual traffic signal lamp;
The real-time electronic police video stream is converted into RGB image data and fed into the signal lamp detection model, which detects in the current RGB image the circumscribed rectangular frame of each traffic signal lamp and the identified signal lamp type;
(1-4) image processing recognition of traffic Signal
(1-4-1) Using the periodicity of the signal lamp, the pixel values at each position of the lamp group area are compared and accumulated over one signal period to obtain the maximum pixel image max_img and the minimum pixel image min_img of the signal lamp area. As shown in fig. 4, the minimum pixel image records the image information when each lamp is off, and the maximum pixel image records the image information when each lamp is on;
(1-4-2) the minimum pixel image min_img of the previous period is subtracted from the signal lamp area image img of the current period to obtain the difference image diff_img;
(1-4-3) in each lamp group image, the position of a lit lamp changes most sharply while the remaining positions change slowly. A suitable threshold thresh is therefore set and the difference image is binarized against it: pixels at sharply changing positions are retained, and all mildly changing pixels are set to 0;
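Steps (1-4-1) through (1-4-3) can be sketched with NumPy as below. This is a minimal illustration under assumed inputs (a list of same-shaped `uint8` frames for one period); the function names are not from the patent.

```python
# Accumulate per-pixel min/max over one signal period, then difference the
# current frame against the lamp-off reference (min_img) and binarize.
import numpy as np

def period_min_max(frames):
    """Per-pixel minimum and maximum over all frames of one signal period."""
    min_img = frames[0].copy()
    max_img = frames[0].copy()
    for f in frames[1:]:
        np.minimum(min_img, f, out=min_img)  # converges to the lamp-off image
        np.maximum(max_img, f, out=max_img)  # converges to the lamp-on image
    return min_img, max_img

def binarized_diff(img, min_img, thresh):
    """diff_img = img - min_img; keep only strongly changed pixels as 255."""
    diff = img.astype(np.int16) - min_img.astype(np.int16)
    return np.where(diff >= thresh, 255, 0).astype(np.uint8)
```

The surviving white pixels mark lamp positions, which the later steps match against the configured rectangles.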
(1-5) counting the state of a traffic signal lamp group according to the signal lamp period;
the traffic signal lamp group comprises a red lamp, a yellow lamp, a green lamp and a countdown lamp; the red light represents no passage, the green light represents permission of passage, and the yellow light represents warning;
the red, green, yellow, and countdown lamps appearing within one signal period are counted, taking the complete signal period duration of the current intersection as reference;
if the counted red light never goes off within the signal period, the red light has a fault;
if the counted green light never goes off within the signal period, the green light has a fault;
if the counted yellow light never goes off within the signal period, the yellow light has a fault;
if the counted countdown lamp never goes off within the signal period, the countdown lamp has a fault; if none of the counted red, green, yellow, and countdown lamps is lit within the signal period, an all-lamps-off fault exists;
if the counted time t for which the red and yellow lights are lit simultaneously within the signal period satisfies t > T, where T is a set time threshold, a red-and-yellow simultaneously-on fault is judged;
if the counted time t for which the red and green lights are lit simultaneously within the signal period satisfies t > T, a red-and-green simultaneously-on fault is judged;
if the counted time t for which the yellow and green lights are lit simultaneously within the signal period satisfies t > T, a yellow-and-green simultaneously-on fault is judged.
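The period-level fault rules above can be sketched as a single check over per-lamp statistics. This is an illustrative reading, not the patent's code: "never goes off" is interpreted as the lamp staying lit for the whole period, and the names `on_time`, `overlap`, `period`, and `t_max` (the threshold T) are assumptions.

```python
# Evaluate the per-period fault rules. on_time maps lamp name -> seconds lit
# in one period; overlap maps a lamp pair -> seconds both were lit together.
def period_faults(on_time, overlap, period, t_max):
    faults = []
    lamps = ("red", "green", "yellow", "countdown")
    for lamp in lamps:
        # lamp lit for the entire period: it never goes off
        if on_time.get(lamp, 0) >= period:
            faults.append(f"{lamp} never goes off")
    if all(on_time.get(l, 0) == 0 for l in lamps):
        faults.append("all lamps off")
    for pair in (("red", "yellow"), ("red", "green"), ("yellow", "green")):
        # simultaneous-on time exceeding the threshold T is a fault
        if overlap.get(pair, 0) > t_max:
            faults.append(f"{pair[0]} and {pair[1]} on simultaneously")
    return faults
```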
As shown in fig. 5, the rectangular frames normally detected by the traffic signal detection model are projected onto a two-dimensional plane and arranged according to the image positions of the signal lamps; the distribution of statistical centers formed by periodic statistics of the traffic signal lamp positions follows a normal distribution.
The traffic signal lamp statistics principle is as follows:
the first detected signal lamp is recorded into the statistical queue;
thereafter, if the distance d between a newly detected signal lamp A and every signal lamp already in the queue satisfies d > (h + w)/4, the position information of signal lamp A is added to the queue as a new entry.
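The queue rule can be sketched as below. The patent does not spell out the distance measure d, so Euclidean distance between lamp centers is an assumption here, and `update_queue` is an illustrative name.

```python
# Statistical queue for detected lamp positions: a new lamp is appended only
# when its center is farther than (h + w)/4 from every lamp already recorded,
# so near-duplicate detections of the same lamp do not create new entries.
import math

def update_queue(queue, x, y, w, h):
    for qx, qy, _, _ in queue:
        d = math.hypot(x - qx, y - qy)  # assumed Euclidean center distance
        if d <= (h + w) / 4:
            return queue  # close to an existing lamp: no new entry
    queue.append((x, y, w, h))
    return queue
```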
As shown in fig. 6, throughout the matching process, the traffic signal can be judged normal as long as one group of traffic light statistical results completely matches the configuration information.
Determining matching reference points in the traffic signal configuration information:
In traffic signal lamp detection, extensive video verification shows that the green-light detection error rate is very low; therefore, whenever a green light is detected it can serve as the matching reference point. If the traffic signal configuration information itself contains no green light, the entire lamp group must be fully matched to be judged correct.
(1-6) The detection area in the traffic signal lamp image is detected with the signal lamp detection model, and the detected position and category information of each signal lamp is matched against the configured signal lamp information of each lamp group (that is, the detected center coordinates of a signal lamp must lie within a configured circumscribed rectangular frame whose category information matches the detected category). Any signal lamp that is not successfully matched is judged to be a traffic signal lamp fault.
(1-6-1) if the category information in the traffic signal lamp group configuration information has a green lamp, entering the following matching and fault judgment process:
(1-6-1-1) counting the position information of the green light by taking one signal light period as a complete counting time;
(1-6-1-2) when the position information of the green light does not exist in the statistical information, alarming to enter an alarm queue when the green light is abnormal;
(1-6-1-3) when the position information of the green light exists in the statistical information, obtain from the signal lamp configuration information the position information (xg, yg, w, h) of the circumscribed rectangle of the green light and the position information (xr, yr, w, h) of the circumscribed rectangle of the red light, and calculate the distances dx1, dx2 (or dy1, dy2) between the circumscribed rectangle of the red light and that of the green light in the configuration information:
for horizontal lamp groups: dx1 = |xg - xr|, dx2 = |yg - yr|;
for vertical lamp groups: dy1 = |yg - yr|, dy2 = |xg - xr|;
given the green light position (x'g, y'g, w, h) in the statistical information, calculate the red light position (x'r, y'r, w, h) using dx1, dx2 or dy1, dy2:
for horizontal lamp groups: x'r = x'g - dx1, y'r = y'g - dx2;
for vertical lamp groups: y'r = y'g - dy1, x'r = x'g - dy2;
if the red light position information (x1, y1, w, h) in the statistical information does not match the calculated red light position (x'r, y'r, w, h), a red-light abnormality alarm enters the alarm queue;
the matching conditions are as follows:
if (x'r - w/2) < x1 < (x'r + w/2) and |y'r - y1| < h/2, it is regarded as a horizontal match;
if (y'r - h/2) < y1 < (y'r + h/2) and |x'r - x1| < w/2, it is regarded as a vertical match;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group must satisfy the vertical match;
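The offset-and-match logic of step (1-6-1-3) can be sketched for a horizontal lamp group as follows. Variable names mirror the text; the function itself is an illustrative reconstruction, not the patent's implementation.

```python
# Predict where the red lamp should appear from the detected green lamp and
# the configured green->red offsets, then apply the horizontal matching test.
def red_matches_horizontal(cfg_green, cfg_red, det_green, det_red, w, h):
    xg, yg = cfg_green
    xr, yr = cfg_red
    dx1, dx2 = abs(xg - xr), abs(yg - yr)    # configured offsets
    xg_, yg_ = det_green
    xr_pred, yr_pred = xg_ - dx1, yg_ - dx2  # predicted red position
    x1, y1 = det_red
    # horizontal matching condition from the text
    return (xr_pred - w / 2) < x1 < (xr_pred + w / 2) and abs(yr_pred - y1) < h / 2
```

A failed match here is what sends the red-light abnormality alarm into the alarm queue.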
(1-6-1-4) when the position information of the green light exists in the statistical information, obtain from the signal lamp configuration information the position information (xg, yg, w, h) of the circumscribed rectangle of the green light and the position information (xy, yy, w, h) of the circumscribed rectangle of the yellow light, and calculate the distances dx3, dx4 (or dy3, dy4) between the circumscribed rectangle of the yellow light and that of the green light in the configuration information:
for horizontal lamp groups: dx3 = |xg - xy|, dx4 = |yg - yy|;
for vertical lamp groups: dy3 = |yg - yy|, dy4 = |xg - xy|;
given the green light position (x'g, y'g, w, h) in the statistical information, calculate the yellow light position (x'y, y'y, w, h) using dx3, dx4 or dy3, dy4:
for horizontal lamp groups: x'y = x'g - dx3, y'y = y'g - dx4;
for vertical lamp groups: y'y = y'g - dy3, x'y = x'g - dy4;
if the yellow light position information (x2, y2, w, h) in the statistical information does not match the calculated yellow light position (x'y, y'y, w, h), a yellow-light abnormality alarm enters the alarm queue;
the matching conditions are as follows:
if (x'y - w/2) < x2 < (x'y + w/2) and |y'y - y2| < h/2, it is regarded as a horizontal match;
if (y'y - h/2) < y2 < (y'y + h/2) and |x'y - x2| < w/2, it is regarded as a vertical match;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group must satisfy the vertical match;
(1-6-1-5) when the position information of the green light exists in the statistical information, obtain from the signal lamp configuration information the position information (xg, yg, w, h) of the circumscribed rectangle of the green light and the position information (xc, yc, w, h) of the circumscribed rectangle of the countdown lamp, and calculate the distances dx5, dx6 (or dy5, dy6) between the circumscribed rectangle of the countdown lamp and that of the green light in the configuration information:
for horizontal lamp groups: dx5 = |xg - xc|, dx6 = |yg - yc|;
for vertical lamp groups: dy5 = |yg - yc|, dy6 = |xg - xc|;
given the green light position (x'g, y'g, w, h) in the statistical information, calculate the countdown lamp position (x'c, y'c, w, h) using dx5, dx6 or dy5, dy6:
for horizontal lamp groups: x'c = x'g - dx5, y'c = y'g - dx6;
for vertical lamp groups: y'c = y'g - dy5, x'c = x'g - dy6;
if the countdown lamp position information (x3, y3, w, h) in the statistical information does not match the calculated countdown lamp position (x'c, y'c, w, h), a countdown-lamp abnormality alarm enters the alarm queue;
the matching conditions are as follows:
if (x'c - w/2) < x3 < (x'c + w/2) and |y'c - y3| < h/2, it is regarded as a horizontal match;
if (y'c - h/2) < y3 < (y'c + h/2) and |x'c - x3| < w/2, it is regarded as a vertical match;
if the green light, red light, yellow light, and countdown lamp all match, the alarm queue is cleared, the matching is complete, and the traffic signal lamp is normal;
(1-6-2) if the category information in the traffic signal lamp group configuration information does not have a green lamp, entering the following matching and fault judgment process:
if the positions and states of the signal lamps detected and counted within one signal period can be completely matched with the traffic signal lamp group configuration information, the lamp group is normal; otherwise, the lamp group is judged abnormal.
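The green-less branch requires a complete match of the whole group. A sketch of such a full-match check is given below; the representation of configured lamps and detections is an assumption for illustration.

```python
# A lamp group with no green reference is judged normal only if every
# configured lamp is covered by a detection of the same type whose center
# lies inside that lamp's configured circumscribed rectangle.
def group_fully_matched(configured, detected):
    """configured: list of (type, x, y, w, h); detected: list of (type, cx, cy)."""
    for typ, x, y, w, h in configured:
        hit = any(
            dtyp == typ
            and x - w / 2 < cx < x + w / 2
            and y - h / 2 < cy < y + h / 2
            for dtyp, cx, cy in detected
        )
        if not hit:
            return False  # at least one configured lamp was never observed
    return True
```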
The above-described embodiments are only preferred embodiments of the present invention, and are not intended to limit the present invention in any way, and other variations and modifications may be made without departing from the spirit of the invention as set forth in the claims.

Claims (10)

1. The traffic signal lamp fault detection method integrating deep learning and image processing is characterized by comprising the following steps of:
step S1: the method comprises the steps of obtaining electronic police video streams under various weather conditions, decoding the electronic police video streams to obtain images containing traffic signal lamps, position information of traffic signal lamp groups in the images, and type information and position information of each signal lamp in the traffic signal lamp groups;
step S2: configuring traffic signal lamp group information and signal lamp information in a lamp group;
step S3: establishing a detection model of a traffic signal lamp;
step S4: carrying out image processing and identification on the traffic signal lamp;
step S5: counting the state of a traffic signal lamp group according to a signal lamp period;
step S6: detecting a detection area in a traffic signal lamp image by using a signal lamp detection model, and if the detection model fails to detect a signal lamp, detecting the position and the state of the signal lamp in the image by using an image processing method for the lamp group area; matching the detected position information and the detected category information of each signal lamp with the signal lamp information configured in each signal lamp group; and judging the fault of the traffic signal lamp for the signal lamp which is not successfully matched.
2. The deep learning and image processing integrated traffic signal fault detection method according to claim 1, wherein the step S2 comprises the steps of:
step S21: configuring signal lamp group information; setting a detection area and a lamp group area of a signal lamp group, and counting the type and position information of the signal lamp in the set lamp group area if the detected position of the signal lamp is in the set lamp group area;
step S22: configuring signal lamp information; and arranging signal lamp external rectangular frames, wherein the external rectangular frames of each signal lamp contain the position information and the type information of the signal lamp.
3. The deep learning and image processing fused traffic signal lamp fault detection method according to claim 1 or 2, wherein the step S3 comprises the steps of:
step S31: selecting a YOLOV3-tiny depth convolution network as a traffic signal lamp target detection network;
step S32: making a training sample of the traffic signal lamp, sending the training sample into a traffic signal lamp target detection network, and performing classification regression training of a traffic signal detection model to generate a signal lamp detection model;
step S33: the type of the individual traffic signal light is identified and the individual traffic signal light is located.
4. The deep learning and image processing fused traffic signal lamp fault detection method according to claim 1 or 2, wherein the step S4 comprises the steps of:
step S41: counting a maximum pixel image max _ img and a minimum pixel image min _ img in a signal lamp area in a period by using the periodicity of a signal lamp;
step S42: subtracting the image img of the signal lamp region in the current period from the minimum pixel image min _ img in the previous period to obtain a difference image diff _ img;
step S43: setting a proper threshold value thresh, and carrying out image binarization operation on the difference image according to the threshold value to obtain the position information of the signal lamp;
step S44: and matching the position information of the signal lamp with the position information of the configured signal lamp to obtain the category information of the signal lamp.
5. The deep learning and image processing integrated traffic signal fault detection method according to claim 1, wherein the step S6 comprises the steps of:
step S61: if the category information in the traffic signal lamp group configuration information has a green lamp, the following matching and fault judgment processes are carried out:
step a: counting the position information of the green light by taking a signal light period as a complete counting time;
step b: when the statistical information does not contain the position information of the green light, the abnormal alarm of the green light enters an alarm queue;
step c: when the statistical information includes the position information of the green light, obtain from the signal lamp configuration information the position information (xg, yg, w, h) of the circumscribed rectangle of the green light and the position information (xr, yr, w, h) of the circumscribed rectangle of the red light, and calculate the distances dx1, dx2 (or dy1, dy2) between the circumscribed rectangle of the red light and that of the green light in the configuration information:
for horizontal lamp groups: dx1 = |xg - xr|, dx2 = |yg - yr|;
for vertical lamp groups: dy1 = |yg - yr|, dy2 = |xg - xr|;
given the green light position (x'g, y'g, w, h) in the statistical information, calculate the red light position (x'r, y'r, w, h) using dx1, dx2 or dy1, dy2:
for horizontal lamp groups: x'r = x'g - dx1, y'r = y'g - dx2;
for vertical lamp groups: y'r = y'g - dy1, x'r = x'g - dy2;
if the red light position information (x1, y1, w, h) in the statistical information does not match the calculated red light position (x'r, y'r, w, h), a red-light abnormality alarm enters the alarm queue;
the matching conditions are as follows:
if (x'r - w/2) < x1 < (x'r + w/2) and |y'r - y1| < h/2, it is regarded as a horizontal match;
if (y'r - h/2) < y1 < (y'r + h/2) and |x'r - x1| < w/2, it is regarded as a vertical match;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group must satisfy the vertical match;
step d: when the statistical information includes the position information of the green light, obtain from the signal lamp configuration information the position information (xg, yg, w, h) of the circumscribed rectangle of the green light and the position information (xy, yy, w, h) of the circumscribed rectangle of the yellow light, and calculate the distances dx3, dx4 (or dy3, dy4) between the circumscribed rectangle of the yellow light and that of the green light in the configuration information:
for horizontal lamp groups: dx3 = |xg - xy|, dx4 = |yg - yy|;
for vertical lamp groups: dy3 = |yg - yy|, dy4 = |xg - xy|;
given the green light position (x'g, y'g, w, h) in the statistical information, calculate the yellow light position (x'y, y'y, w, h) using dx3, dx4 or dy3, dy4:
for horizontal lamp groups: x'y = x'g - dx3, y'y = y'g - dx4;
for vertical lamp groups: y'y = y'g - dy3, x'y = x'g - dy4;
if the yellow light position information (x2, y2, w, h) in the statistical information does not match the calculated yellow light position (x'y, y'y, w, h), a yellow-light abnormality alarm enters the alarm queue;
the matching conditions are as follows:
if (x'y - w/2) < x2 < (x'y + w/2) and |y'y - y2| < h/2, it is regarded as a horizontal match;
if (y'y - h/2) < y2 < (y'y + h/2) and |x'y - x2| < w/2, it is regarded as a vertical match;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group must satisfy the vertical match;
step e: when the statistical information includes the position information of the green light, obtain from the signal lamp configuration information the position information (xg, yg, w, h) of the circumscribed rectangle of the green light and the position information (xc, yc, w, h) of the circumscribed rectangle of the countdown lamp, and calculate the distances dx5, dx6 (or dy5, dy6) between the circumscribed rectangle of the countdown lamp and that of the green light in the configuration information:
for horizontal lamp groups: dx5 = |xg - xc|, dx6 = |yg - yc|;
for vertical lamp groups: dy5 = |yg - yc|, dy6 = |xg - xc|;
given the green light position (x'g, y'g, w, h) in the statistical information, calculate the countdown lamp position (x'c, y'c, w, h) using dx5, dx6 or dy5, dy6:
for horizontal lamp groups: x'c = x'g - dx5, y'c = y'g - dx6;
for vertical lamp groups: y'c = y'g - dy5, x'c = x'g - dy6;
if the countdown lamp position information (x3, y3, w, h) in the statistical information does not match the calculated countdown lamp position (x'c, y'c, w, h), a countdown-lamp abnormality alarm enters the alarm queue;
the matching conditions are as follows:
if (x'c - w/2) < x3 < (x'c + w/2) and |y'c - y3| < h/2, it is regarded as a horizontal match;
if (y'c - h/2) < y3 < (y'c + h/2) and |x'c - x3| < w/2, it is regarded as a vertical match;
if the green light, red light, yellow light, and countdown lamp all match, the alarm queue is cleared, the matching is complete, and the traffic signal lamp is normal;
step S62: if the category information in the traffic signal lamp group configuration information does not have a green lamp, the following matching and fault judgment processes are carried out:
if the positions and states of the signal lamps detected and counted within one signal period can be completely matched with the traffic signal lamp group configuration information, the lamp group is normal; otherwise, the lamp group is judged abnormal.
6. The deep learning and image processing integrated traffic signal fault detection method according to claim 3, wherein the method for identifying the type of the single traffic signal in step S33 is as follows: converting a real-time electronic police video stream into RGB image data, sending the RGB image data into a signal lamp detection model, and detecting an external rectangular frame of a traffic signal lamp and the type of the identified signal lamp in a current RGB image by the signal lamp detection model.
7. The deep learning and image processing integrated traffic signal lamp fault detection method as claimed in claim 4, wherein the maximum pixel image max_img and the minimum pixel image min_img in step S41 are obtained as follows: starting from the current signal period T1, with the lamp group region image img_1 of the first RGB frame as reference, the lamp group region image img(t) of each subsequent RGB frame is compared with the first frame image, where t takes values in [0, T] and different values of t correspond to the lamp group region images img(t) at different moments;
let the minimum pixel image of the lamp group area be min_img
and the maximum pixel image of the lamp group area be max_img, initialized as:
min_img = img(0)
max_img = img(0)
within a signal period T, when
min_img[i, j] > img(t)[i, j], then min_img[i, j] = img(t)[i, j];
when max_img[i, j] < img(t)[i, j], then max_img[i, j] = img(t)[i, j];
where [i, j] is the coordinate of a pixel in the image and t takes values in [0, T];
when t = T, the minimum pixel image min_img and the maximum pixel image max_img within period T1 are obtained.
8. The deep learning and image processing combined traffic signal lamp fault detection method as claimed in claim 4, wherein the difference image diff_img in step S42 is obtained as follows: the image in the lamp group region of each RGB frame in the current period T2 is img(t), and the difference image diff_img(t) = img(t) - min_img is obtained by subtracting the minimum pixel image min_img of the lamp group region obtained in the previous period T1.
9. The traffic signal lamp fault detection method integrating deep learning and image processing as claimed in claim 4, wherein the position information of the signal lamp in step S43 is acquired as follows: for diff_img(t) obtained in step S42, a corresponding threshold thresh is set:
if diff_img(t)[i, j] >= thresh, then diff_img(t)[i, j] = 255;
if diff_img(t)[i, j] < thresh, then diff_img(t)[i, j] = 0;
a binary image is thus obtained from the difference image, and contour detection and minimum circumscribed rectangle extraction are performed on the white pixel blocks in the binary image; let the center of an obtained minimum circumscribed rectangle be (xi, yi), its length hi and its width wi;
in the configured signal lamp information,
the position information of the circumscribed rectangle of the red light is (xr, yr, w, h);
the position information of the circumscribed rectangle of the green light is (xg, yg, w, h);
the position information of the circumscribed rectangle of the yellow light is (xy, yy, w, h);
the position information of the circumscribed rectangle of the countdown lamp is (xc, yc, w, h);
if xr - w/2 < xi < xr + w/2 and yr - h/2 < yi < yr + h/2, the signal lamp identified in the image is a red light;
if xg - w/2 < xi < xg + w/2 and yg - h/2 < yi < yg + h/2, the signal lamp identified in the image is a green light;
if xy - w/2 < xi < xy + w/2 and yy - h/2 < yi < yy + h/2, the signal lamp identified in the image is a yellow light;
if xc - w/2 < xi < xc + w/2 and yc - h/2 < yi < yc + h/2, the signal lamp identified in the image is a countdown lamp;
the threshold thresh is calculated as follows:
thresh = mean + α * dev,
where mean is the mean of diff_img(t), dev is the standard deviation of diff_img(t), and α is a coefficient in the range [0, 4].
10. The traffic signal lamp fault detection method integrating deep learning and image processing as claimed in claim 1 or 2, wherein the state of one traffic signal lamp group within a signal lamp period in step S5 is acquired as follows: the traffic signal lamp group comprises a red lamp, a yellow lamp, a green lamp, and a countdown lamp; the red light means no passage, the green light means passage is permitted, and the yellow light means warning; the red, green, yellow, and countdown lamps appearing within one signal period are counted, taking the complete signal period duration of the current intersection as reference;
if the counted red light never goes off within the signal period, the red light has a fault;
if the counted green light never goes off within the signal period, the green light has a fault;
if the counted yellow light never goes off within the signal period, the yellow light has a fault;
if the counted countdown lamp never goes off within the signal period, the countdown lamp has a fault; if none of the counted red, green, yellow, and countdown lamps is lit within the signal period, an all-lamps-off fault exists;
if the counted time t for which the red and yellow lights are lit simultaneously within the signal period satisfies t > T, where T is a set time threshold, a red-and-yellow simultaneously-on fault is judged;
if the counted time t for which the red and green lights are lit simultaneously within the signal period satisfies t > T, a red-and-green simultaneously-on fault is judged;
if the counted time t for which the yellow and green lights are lit simultaneously within the signal period satisfies t > T, a yellow-and-green simultaneously-on fault is judged.
CN202010865255.XA 2020-08-25 2020-08-25 Traffic signal lamp fault detection method integrating deep learning and image processing Active CN112149509B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010865255.XA CN112149509B (en) 2020-08-25 2020-08-25 Traffic signal lamp fault detection method integrating deep learning and image processing


Publications (2)

Publication Number Publication Date
CN112149509A true CN112149509A (en) 2020-12-29
CN112149509B CN112149509B (en) 2023-05-09


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129591A (en) * 2021-04-13 2021-07-16 江苏智通交通科技有限公司 Traffic signal lamp fault detection method based on deep learning target detection
CN113194589A (en) * 2021-04-22 2021-07-30 九州云(北京)科技发展有限公司 Airport navigation aid light single lamp fault monitoring method based on video analysis
CN114821451A (en) * 2022-06-28 2022-07-29 南开大学 Offline target detection method and system for traffic signal lamp video
CN117475411A (en) * 2023-12-27 2024-01-30 安徽蔚来智驾科技有限公司 Signal lamp countdown identification method, computer readable storage medium and intelligent device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6320981B1 (en) * 1997-08-28 2001-11-20 Fuji Xerox Co., Ltd. Image processing system and image processing method
US20040001624A1 (en) * 2002-07-01 2004-01-01 Xerox Corporation Separation system for Multiple Raster Content (MRC) representation of documents
CN101727573A (en) * 2008-10-13 2010-06-09 汉王科技股份有限公司 Method and device for estimating crowd density in video image
CN103886344A (en) * 2014-04-14 2014-06-25 西安科技大学 Image type fire flame identification method
CN109636777A (en) * 2018-11-20 2019-04-16 广州方纬智慧大脑研究开发有限公司 A kind of fault detection method of traffic lights, system and storage medium
CN111275696A (en) * 2020-02-10 2020-06-12 腾讯科技(深圳)有限公司 Medical image processing method, image processing method and device
CN111428647A (en) * 2020-03-25 2020-07-17 浙江浙大中控信息技术有限公司 Traffic signal lamp fault detection method


Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113129591A (en) * 2021-04-13 2021-07-16 Jiangsu Zhitong Traffic Technology Co., Ltd. Traffic signal lamp fault detection method based on deep learning target detection
CN113194589A (en) * 2021-04-22 2021-07-30 99Cloud (Beijing) Technology Development Co., Ltd. Airport navigation aid light single-lamp fault monitoring method based on video analysis
CN113194589B (en) * 2021-04-22 2021-12-28 99Cloud (Beijing) Technology Development Co., Ltd. Airport navigation aid light single-lamp fault monitoring method based on video analysis
CN114821451A (en) * 2022-06-28 2022-07-29 Nankai University Offline target detection method and system for traffic signal lamp video
CN117475411A (en) * 2023-12-27 2024-01-30 Anhui NIO Intelligent Driving Technology Co., Ltd. Signal lamp countdown identification method, computer-readable storage medium and intelligent device
CN117475411B (en) * 2023-12-27 2024-03-26 Anhui NIO Intelligent Driving Technology Co., Ltd. Signal lamp countdown identification method, computer-readable storage medium and intelligent device

Also Published As

Publication number Publication date
CN112149509B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN112149509B (en) Traffic signal lamp fault detection method integrating deep learning and image processing
CN111428647B (en) Traffic signal lamp fault detection method
CN110197589B (en) Deep learning-based red light violation detection method
CN109636777A (en) Fault detection method, system and storage medium for traffic signal lights
CN106504580A (en) Parking space detection method and device
CN111967498A (en) Night target detection and tracking method based on millimeter wave radar and vision fusion
KR102120854B1 (en) Apparatus for learning-based led electric signboard self-diagnosis bad dot detection and method thereof
US20180247136A1 (en) Video data background tracking and subtraction with multiple layers of stationary foreground and regions
CN112084892B (en) Road abnormal event detection management device and method thereof
CN110991221A (en) Dynamic traffic red light running identification method based on deep learning
CN113903008A (en) Ramp exit vehicle violation identification method based on deep learning and trajectory tracking
CN111753612A (en) Method and device for detecting sprinkled object and storage medium
CN103324957A (en) Identification method and identification device of state of signal lamps
CN111339834B (en) Method for identifying vehicle driving direction, computer device and storage medium
CN112528944A (en) Image identification method and device, electronic equipment and storage medium
CN111291722A (en) Vehicle weight recognition system based on V2I technology
Rachman et al. Camera Self-Calibration: Deep Learning from Driving Scenes
WO2016022020A1 (en) System and method for detecting and reporting location of unilluminated streetlights
KR102178202B1 (en) Method and apparatus for detecting traffic light
CN105740841A (en) Method and device for determining vehicle detection mode
JP3842952B2 (en) Traffic flow measuring device
CN113989774A (en) Traffic light detection method and device, vehicle and readable storage medium
CN107976319B (en) Intelligent detection system and method for vehicles fitted with an additional pedal
CN113076821A (en) Event detection method and device
CN112863194A (en) Image processing method, device, terminal and medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 310053 23-25, 2 building, 352 BINKANG Road, Binjiang District, Hangzhou, Zhejiang.

Applicant after: Zhejiang Zhongkong Information Industry Co., Ltd.

Address before: 310053 23-25, 2 building, 352 BINKANG Road, Binjiang District, Hangzhou, Zhejiang.

Applicant before: ZHEJIANG SUPCON INFORMATION TECHNOLOGY Co., Ltd.

GR01 Patent grant