CN112149509B - Traffic signal lamp fault detection method integrating deep learning and image processing - Google Patents
- Publication number
- CN112149509B (application CN202010865255.XA / CN202010865255A)
- Authority
- CN
- China
- Prior art keywords
- lamp
- signal lamp
- img
- information
- light
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2415—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/097—Supervising of traffic control systems, e.g. by giving an alarm if two crossing streets have green light simultaneously
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/08—Feature extraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F2218/00—Aspects of pattern recognition specially adapted for signal processing
- G06F2218/12—Classification; Matching
-
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02B—CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO BUILDINGS, e.g. HOUSING, HOUSE APPLIANCES OR RELATED END-USER APPLICATIONS
- Y02B20/00—Energy efficient lighting technologies, e.g. halogen lamps or gas discharge lamps
- Y02B20/40—Control techniques providing energy savings, e.g. smart controller or presence detection
Abstract
The invention discloses a traffic signal lamp fault detection method integrating deep learning and image processing, which comprises the following steps. Step S1: obtain electronic police video streams under various weather conditions and decode them to obtain the traffic signal lamp images, the position of the traffic signal lamp group in each image, and the type and position information of each signal lamp in the group. Step S2: configure the traffic signal lamp group information and the signal lamp information within the group. Step S3: establish a detection model for the traffic signal lamp. The invention effectively handles small-range camera shake and position offset and improves alarm accuracy; signal lamp position and state recognition is realized with a deep learning technique based on a convolutional neural network, and recognition accuracy is further improved by an image processing algorithm based on the signal lamp period.
Description
Technical Field
The invention relates to the technical field of traffic signal lamp state monitoring, in particular to a traffic signal lamp fault detection method integrating deep learning and image processing.
Background
Traffic signal lamps are a category of traffic safety products used to strengthen road traffic management and improve road utilization. Their normal operation underpins the normal operation of a city, yet a large number of existing traffic signal lamps are non-intelligent lamps without fault self-diagnosis capability; fault detection still relies mainly on on-duty traffic police, inspection of traffic facilities, citizen reports, and similar means.
These detection modes suffer from heavy maintenance workload, untimely fault discovery, and low efficiency. Given the practical demands of urban road traffic signal lamp fault detection and maintenance, the current mainstream automatic detection methods are based on video recognition.
Video-recognition-based detection mainly adopts deep learning: a large number of signal lamp pictures are first collected and a signal lamp detection model is trained; the model then detects the signal lamps within the signal lamp region of the video; finally, the position and state of each signal lamp are determined. Current schemes are mainly intersection-level and center-level. In practical use this method has problems: the number of signal lamp samples is limited, the generalization ability of the detection model is weak, and missed detections occur, which degrade detection precision and cause false alarms.
For example, the Chinese patent literature "Failure detection method, system and storage medium for traffic signal lamp", publication number CN109636777A, filed November 20, 2018, comprises the steps of: acquiring the video stream of a camera and acquiring single-frame images in real time; acquiring the traffic signal lamp area from the acquired single-frame image; judging the detection environment of the traffic signal lamp according to the traffic signal lamp area; dividing the gray-level map of the traffic signal lamp area according to the judgment result to obtain single-channel gray-level maps, namely a red-channel, a yellow-channel, and a green-channel gray-level map; obtaining a red, a yellow, and a green gray image from the single-channel gray-level maps; binarizing each of the red, yellow, and green gray images to obtain binarized images; and generating a fault detection result from the binarized images, the faults including traffic light extinction faults and traffic light display faults. That method relies mainly on image processing; the number of signal lamp samples it can obtain is limited, its detection precision is low, and it easily causes false alarms.
Disclosure of Invention
The invention mainly addresses the weak generalization of the detection model, low recognition precision, and missed detections in existing video-based detection methods; the proposed traffic signal lamp fault detection method integrating deep learning and image processing reduces the false alarm rate of video-based signal lamp fault detection and improves alarm accuracy.
The technical problems of the invention are mainly solved by the following technical solution. The traffic signal lamp fault detection method integrating deep learning and image processing comprises the following steps:
step S1: obtaining electronic police video streams under various weather conditions, and decoding the electronic police video streams to obtain information comprising traffic signal lamp images, the position of a traffic signal lamp group in the images, the type information and the position information of each signal lamp in the traffic signal lamp group;
step S2: configuring traffic signal lamp group information and signal lamp information in the lamp group;
step S3: establishing a detection model of the traffic signal lamp;
step S4: carrying out image processing and identification on the traffic signal lamp;
step S5: counting the state of a traffic signal lamp group according to the signal lamp period;
step S6: detecting the detection area in the traffic signal lamp image with the signal lamp detection model; if the model fails to detect a signal lamp, detecting the position and state of the signal lamp in the lamp group area with an image processing method; matching the detected position and category information of each signal lamp against the signal lamp information configured for each lamp group; and judging as faulty any signal lamp that is not successfully matched.
Signal lamp configuration information is considered matched when the detected center coordinates of a signal lamp lie inside the circumscribed rectangle configured for that lamp and the detected category equals the configured category. The invention can acquire the position and state of traffic signal lamps in real time, match the detected positions against the configured lamp group information on a per-period statistical basis, correct the traffic signal lamp type, and judge from the configuration information whether a traffic signal lamp is faulty.
According to the invention, a large number of signal lamp samples are collected and a signal lamp detection model is trained; the model detects the position of each signal lamp in each lamp group in the electronic police video and recognizes its state (red, yellow, green, or countdown). When the detection model cannot detect the position of a signal lamp in the image, an image processing method takes over: it locates the current signal lamp, and the lamp type is then obtained from the existing signal lamp configuration information. Recognizing the signal lamp states with both methods improves recognition accuracy in the electronic police video stream. The maximum period of the intersection signal controller is taken as the statistics period, and the positions and types of the single lamps in a lamp group are counted within one period. Using the relative-position configuration of the single lamps within the group, the counted positions and types are matched with the green light as the reference (green light detection accuracy is high), completing the correction of the lamp group type information. Faults such as complete extinction, simultaneous lighting, and countdown extinction are then judged in real time from the lamp group state detected in real time.
Whether each single lamp in the current traffic signal lamp group is faulty is judged from the red, yellow, green, and countdown lamp information detected within the period. Detectable fault types include: the traffic signal lamp group completely off, red light off, yellow light off, green light off, countdown off, red and yellow lit simultaneously, red and green lit simultaneously, yellow and green lit simultaneously, and so on.
Preferably, the step S2 includes the steps of:
step S21: configuring signal lamp group information; setting a detection area and a lamp group area of a signal lamp group, and if the detected signal lamp position is in the set lamp group area, counting the type and position information of the signal lamp in the set lamp group area;
step S22: configuring signal lamp information; and setting signal lamp external rectangular frames, wherein each signal lamp external rectangular frame contains position information and type information of the signal lamp.
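As a minimal sketch of the configuration in steps S21–S22 (the record layout and field names are illustrative, not taken from the patent), each lamp group carries a detection area and per-lamp circumscribed rectangles given by center (x, y) and size (w, h); a detection matches a configured lamp when its center falls inside that rectangle and the category is identical:

```python
from dataclasses import dataclass

@dataclass
class LampConfig:
    kind: str    # "red" | "yellow" | "green" | "countdown"
    x: float     # center x of the circumscribed rectangle
    y: float     # center y
    w: float     # rectangle width
    h: float     # rectangle height

@dataclass
class LampGroupConfig:
    name: str
    area: tuple  # lamp group area (x_min, y_min, x_max, y_max)
    lamps: list  # list of LampConfig

def matches(lamp: LampConfig, cx: float, cy: float, kind: str) -> bool:
    """A detection matches when its center lies inside the configured
    circumscribed rectangle and the category is identical."""
    inside = abs(cx - lamp.x) < lamp.w / 2 and abs(cy - lamp.y) < lamp.h / 2
    return inside and kind == lamp.kind

group = LampGroupConfig(
    name="left-turn group 1",
    area=(100, 50, 220, 90),
    lamps=[
        LampConfig("red", 120, 70, 20, 20),
        LampConfig("yellow", 150, 70, 20, 20),
        LampConfig("green", 180, 70, 20, 20),
        LampConfig("countdown", 210, 70, 20, 20),
    ],
)

# A detection at (121, 69) labelled "red" matches the configured red lamp.
print(any(matches(l, 121, 69, "red") for l in group.lamps))
```

A signal lamp that fails this match for every configured lamp in its group is the candidate for the fault judgment of step S6.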
Preferably, the step S3 includes the steps of:
step S31: selecting the YOLOv3-tiny deep convolutional network as the traffic signal lamp target detection network;
step S32: producing training samples for the traffic signal lamp, feeding them into the traffic signal lamp target detection network, and performing classification-regression training to generate the signal lamp detection model. Specifically: acquire an electronic police video stream, collect traffic signal lamp images in batches from it, ensure that each single traffic signal lamp in an image is at least 8 × 8 pixels, label the position and type of every traffic signal lamp in the image to generate training samples, feed the samples into the target detection network, and perform regression training of the traffic signal detection model to generate the signal lamp detection model;
step S33: the type of individual traffic signal light is identified and the individual traffic signal light is located.
Preferably, the step S4 includes the steps of:
step S41: counting the maximum pixel image max_img and the minimum pixel image min_img in a signal lamp area in one period by utilizing the periodicity of the signal lamp;
step S42: subtracting the minimum pixel image min_img in the previous period from the signal lamp area image img in the current period to obtain a difference image diff_img;
step S43: setting a proper threshold value thresh, and performing image binarization operation on the difference image according to the threshold value to obtain the position information of the signal lamp;
step S44: and matching the position information of the signal lamp with the configured position information of the signal lamp to obtain the category information of the signal lamp.
Preferably, the step S6 includes the steps of:
step S61: if the category information in the traffic signal lamp group configuration information has green light, entering the following matching and fault judging process:
step a: taking a signal lamp period as a complete statistical time, and counting the position information of the green lamp;
step b: when the statistical information does not contain green light position information, the green light abnormal alarm enters an alarm queue;
step c: when the statistical information includes the green light position, use the signal lamp configuration information to obtain the position (x_g, y_g, w, h) of the circumscribed rectangle of the green light and the position (x_r, y_r, w, h) of the circumscribed rectangle of the red light, and calculate the distances d_x1, d_x2 or d_y1, d_y2 between the configured red and green rectangles;
for a horizontal lamp group: d_x1 = |x_g - x_r|, d_x2 = |y_g - y_r|;
for a vertical lamp group: d_y1 = |y_g - y_r|, d_y2 = |x_g - x_r|;
given the green light position (x'_g, y'_g, w, h) in the statistical information, use d_x1, d_x2 or d_y1, d_y2 to calculate the red light position (x'_r, y'_r, w, h):
for a horizontal lamp group: x'_r = x'_g - d_x1, y'_r = y'_g - d_x2;
for a vertical lamp group: y'_r = y'_g - d_y1, x'_r = x'_g - d_y2;
if the detected red light position (x_1, y_1, w, h) does not match the calculated red light position (x'_r, y'_r, w, h), a red-light abnormal alarm enters the alarm queue;
the matching conditions are as follows:
if (x'_r - w/2) < x_1 < (x'_r + w/2) and |y'_r - y_1| < h/2, it is considered a horizontal match;
if (y'_r - h/2) < y_1 < (y'_r + h/2) and |x'_r - x_1| < w/2, it is considered a vertical match;
the horizontal lamp group needs to meet the requirement of horizontal matching, and the vertical lamp group needs to meet the requirement of vertical matching;
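The offset-and-match logic of step c can be sketched as follows; a horizontal lamp group is assumed, and the variable names mirror the text (configured centers, detected green center, predicted red center):

```python
def predict_red_from_green(cfg_green, cfg_red, det_green):
    """Predict the red-lamp center from the detected green-lamp center
    using the configured offsets d_x1, d_x2 (horizontal lamp group)."""
    xg, yg = cfg_green                 # configured green center (x_g, y_g)
    xr, yr = cfg_red                   # configured red center (x_r, y_r)
    d_x1, d_x2 = abs(xg - xr), abs(yg - yr)
    xg_d, yg_d = det_green             # detected green center (x'_g, y'_g)
    return xg_d - d_x1, yg_d - d_x2    # predicted red center (x'_r, y'_r)

def horizontal_match(pred, det, w, h):
    """Horizontal match condition: the detected center lies within a
    w x h window around the predicted center."""
    (xp, yp), (x1, y1) = pred, det
    return (xp - w / 2) < x1 < (xp + w / 2) and abs(yp - y1) < h / 2

# Configured green at (180, 70), red at (120, 70); green detected at (183, 71).
pred = predict_red_from_green((180, 70), (120, 70), (183, 71))
print(pred)                                   # (123, 71)
print(horizontal_match(pred, (124, 70), 20, 20))
```

The same pattern is applied for the yellow and countdown lamps in steps d and e, and a failed match sends the corresponding abnormal alarm into the alarm queue.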
step d: when the statistical information includes the green light position, use the signal lamp configuration information to obtain the position (x_g, y_g, w, h) of the circumscribed rectangle of the green light and the position (x_y, y_y, w, h) of the circumscribed rectangle of the yellow light, and calculate the distances d_x3, d_x4 or d_y3, d_y4 between the configured yellow and green rectangles;
for a horizontal lamp group: d_x3 = |x_g - x_y|, d_x4 = |y_g - y_y|;
for a vertical lamp group: d_y3 = |y_g - y_y|, d_y4 = |x_g - x_y|;
once the green light position (x'_g, y'_g, w, h) in the statistical information is obtained, use d_x3, d_x4 or d_y3, d_y4 to calculate the yellow light position (x'_y, y'_y, w, h):
for a horizontal lamp group: x'_y = x'_g - d_x3, y'_y = y'_g - d_x4;
for a vertical lamp group: y'_y = y'_g - d_y3, x'_y = x'_g - d_y4;
if the detected yellow light position (x_2, y_2, w, h) does not match the calculated yellow light position (x'_y, y'_y, w, h), a yellow-light abnormal alarm enters the alarm queue;
the matching conditions are as follows:
if (x'_y - w/2) < x_2 < (x'_y + w/2) and |y'_y - y_2| < h/2, it is considered a horizontal match;
if (y'_y - h/2) < y_2 < (y'_y + h/2) and |x'_y - x_2| < w/2, it is considered a vertical match;
the horizontal lamp group needs to meet the requirement of horizontal matching, and the vertical lamp group needs to meet the requirement of vertical matching;
step e: when the statistical information includes the green light position, use the signal lamp configuration information to obtain the position (x_g, y_g, w, h) of the circumscribed rectangle of the green light and the position (x_c, y_c, w, h) of the circumscribed rectangle of the countdown lamp, and calculate the distances d_x5, d_x6 or d_y5, d_y6 between the configured countdown and green rectangles;
for a horizontal lamp group: d_x5 = |x_g - x_c|, d_x6 = |y_g - y_c|;
for a vertical lamp group: d_y5 = |y_g - y_c|, d_y6 = |x_g - x_c|;
once the green light position (x'_g, y'_g, w, h) in the statistical information is obtained, use d_x5, d_x6 or d_y5, d_y6 to calculate the countdown lamp position (x'_c, y'_c, w, h):
for a horizontal lamp group: x'_c = x'_g - d_x5, y'_c = y'_g - d_x6;
for a vertical lamp group: y'_c = y'_g - d_y5, x'_c = x'_g - d_y6;
if the detected countdown lamp position (x_3, y_3, w, h) does not match the calculated countdown lamp position (x'_c, y'_c, w, h), a countdown-lamp abnormal alarm enters the alarm queue;
the matching conditions are as follows:
if (x'_c - w/2) < x_3 < (x'_c + w/2) and |y'_c - y_3| < h/2, it is considered a horizontal match;
if (y'_c - h/2) < y_3 < (y'_c + h/2) and |x'_c - x_3| < w/2, it is considered a vertical match;
if the green, red, yellow, and countdown lamps are all matched, the alarm queue is cleared; once matching is complete, the traffic signal lamp is normal;
step S62: if the category information in the traffic signal lamp group configuration contains no green light, enter the following matching and fault judgment process: among the signal lamp positions and states counted during the signal period, if one group of lamp positions can be completely matched with the configured traffic signal lamp group information, that lamp group is normal; otherwise, the lamp group is judged abnormal.
Preferably, the method for identifying the type of a single traffic signal lamp in step S33 is: convert the real-time electronic police video stream into RGB image data and feed it into the signal lamp detection model; the model outputs the circumscribed rectangle of each traffic signal lamp in the current RGB image and the recognized lamp type.
Preferably, the maximum pixel image max_img and minimum pixel image min_img in step S41 are obtained as follows: starting from the current signal period T1, with period length T, take the lamp group area image img_1 of the first RGB frame as the reference and compare the lamp group area image img(t) of every subsequent frame with it, where t ranges over [0, T] and different values of t correspond to the lamp group area images img(t) at different moments;
let min_img be the minimum pixel image of the lamp group area,
and max_img the maximum pixel image of the lamp group area;
initialize: min_img = img(0),
max_img = img(0);
within one signal period T, when
min_img[i, j] > img(t)[i, j], set min_img[i, j] = img(t)[i, j];
when max_img[i, j] < img(t)[i, j], set max_img[i, j] = img(t)[i, j];
where [i, j] are the coordinates of a pixel in the image and t ranges over [0, T];
when t = T, the minimum pixel image min_img and maximum pixel image max_img of period T1 are obtained.
Preferably, the difference image diff_img in step S42 is obtained as follows: for each RGB frame in the current period T2, take the lamp group area image img(t) and subtract the lamp-group-area minimum pixel image min_img obtained in the previous period T1, giving the difference image diff_img(t) = img(t) - min_img.
Preferably, the position information of the signal lamp in step S43 is obtained as follows: for the diff_img(t) obtained in step S42, set a corresponding threshold thresh;
if diff_img(t)[i, j] >= thresh,
set diff_img(t)[i, j] = 255;
otherwise (diff_img(t)[i, j] < thresh),
set diff_img(t)[i, j] = 0;
a binary image is thus obtained from the difference image; contour detection and minimum circumscribed rectangle extraction are performed on the white pixel blocks in the binary image; let the center of the obtained minimum circumscribed rectangle be (x_i, y_i), its length h_i and width w_i;
in the configured signal lamp information,
the circumscribed rectangle of the red light is (x_r, y_r, w, h);
the circumscribed rectangle of the green light is (x_g, y_g, w, h);
the circumscribed rectangle of the yellow light is (x_y, y_y, w, h);
the circumscribed rectangle of the countdown lamp is (x_c, y_c, w, h);
if x_r - w/2 < x_i < x_r + w/2 and y_r - h/2 < y_i < y_r + h/2,
the recognized signal lamp is a red light;
if x_g - w/2 < x_i < x_g + w/2 and y_g - h/2 < y_i < y_g + h/2,
the recognized signal lamp is a green light;
if x_y - w/2 < x_i < x_y + w/2 and y_y - h/2 < y_i < y_y + h/2,
the recognized signal lamp is a yellow light;
if x_c - w/2 < x_i < x_c + w/2 and y_c - h/2 < y_i < y_c + h/2, the recognized signal lamp is a countdown lamp;
the threshold thresh is calculated as:
thresh = mean + α * dev,
where mean is the mean of diff_img(t), dev is the standard deviation of diff_img(t), and α is a coefficient in the range [0, 4].
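Steps S42–S43 can be sketched with NumPy; the configured rectangles come from step S22, and the toy image, α value, and lamp layout are illustrative:

```python
import numpy as np

def binarize(diff_img, alpha=2.0):
    """Adaptive binarization: thresh = mean + alpha * dev, alpha in [0, 4]."""
    thresh = diff_img.mean() + alpha * diff_img.std()
    return np.where(diff_img >= thresh, 255, 0)

def classify(center, config):
    """Assign a detected rectangle center (x_i, y_i) to the configured lamp
    whose circumscribed rectangle (x, y, w, h) contains it, or None."""
    xi, yi = center
    for kind, (x, y, w, h) in config.items():
        if x - w / 2 < xi < x + w / 2 and y - h / 2 < yi < y + h / 2:
            return kind
    return None

# Toy difference image: one bright lamp on a dark background.
diff = np.full((8, 8), 5)
diff[2:4, 2:4] = 200
binary = binarize(diff, alpha=2.0)
ys, xs = np.nonzero(binary)
center = (xs.mean(), ys.mean())      # center of the white pixel block
config = {"red": (2.5, 2.5, 4, 4), "green": (6.5, 2.5, 4, 4)}
print(classify(center, config))      # red
```

In a full implementation the center would come from contour detection and minimum-circumscribed-rectangle extraction (e.g. an OpenCV contour pass) rather than from a single mean over all white pixels, which only works for one lamp at a time.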
Preferably, in step S5, the state of one traffic signal lamp group within the signal lamp period is acquired as follows: the traffic signal lamp group comprises a red light, a yellow light, a green light, and a countdown lamp; the red light prohibits passage, the green light permits passage, and the yellow light gives warning;
counting red lights, green lights, yellow lights and countdown lights which appear in a signal lamp period according to the period length of a complete signal lamp period of the current intersection as a reference;
if the counted red light in the signal lamp period is not on, the red light is in failure;
if the counted green light in the signal lamp period is not on, the green light is in failure;
if the yellow lamp counted in the signal lamp period is not on, the yellow lamp is in failure;
if the counted countdown lamp in the signal lamp period is not on, the countdown lamp is out of order; if the counted red light, green light, yellow light and countdown light in the signal lamp period are not on, the lamp is in full failure;
if the counted time t for which the red and yellow lights are simultaneously lit within the signal lamp period exceeds T, where T is a set time threshold, a red-and-yellow simultaneous-lighting fault is judged;
if the counted time t for which the red and green lights are simultaneously lit exceeds T, a red-and-green simultaneous-lighting fault is judged;
if the counted time t for which the yellow and green lights are simultaneously lit exceeds T, a yellow-and-green simultaneous-lighting fault is judged;
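The per-period fault rules above can be sketched as one check over the statistics gathered in step S5; the record layout and the threshold value are illustrative assumptions:

```python
def period_faults(lit_seconds, overlap_seconds, t_thresh=0.5):
    """lit_seconds: per-lamp total lit time within one signal period;
    overlap_seconds: simultaneous-lighting time per lamp pair;
    t_thresh: the set time threshold T. Returns detected fault labels."""
    faults = []
    lamps = ("red", "yellow", "green", "countdown")
    if all(lit_seconds.get(k, 0) == 0 for k in lamps):
        faults.append("all lamps off")          # complete extinction
    else:
        for k in lamps:
            if lit_seconds.get(k, 0) == 0:
                faults.append(f"{k} off")       # single-lamp extinction
    for pair, t in overlap_seconds.items():
        if t > t_thresh:                        # t > T: simultaneous lighting
            faults.append(f"{pair[0]} and {pair[1]} lit simultaneously")
    return faults

stats = {"red": 30, "yellow": 3, "green": 25, "countdown": 0}
overlap = {("red", "green"): 1.2}
print(period_faults(stats, overlap))
```

This prints both a countdown-extinction fault and a red-and-green simultaneous-lighting fault for the toy statistics above.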
the traffic signal lamp statistics principle is as follows: recording a first signal lamp entering a statistics queue;
and calculating the distance d between the center of the signal lamp A entering the statistical queue and the center of the signal lamp existing in the statistical queue, and if d > (h+w)/4, adding the position information of the signal lamp A into the queue.
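A minimal sketch of this deduplicating statistics queue (Euclidean distance is assumed for d):

```python
import math

def add_if_new(queue, lamp, w, h):
    """Append lamp center (cx, cy) to the statistics queue unless it lies
    within (h + w) / 4 of a lamp already recorded (duplicate detection)."""
    for cx, cy in queue:
        d = math.hypot(lamp[0] - cx, lamp[1] - cy)
        if d <= (h + w) / 4:
            return False                      # duplicate of an existing lamp
    queue.append(lamp)
    return True

queue = []
add_if_new(queue, (120, 70), 20, 20)          # first lamp is recorded
add_if_new(queue, (122, 71), 20, 20)          # d ~ 2.2 <= 10: duplicate
add_if_new(queue, (150, 70), 20, 20)          # d = 30 > 10: new lamp
print(len(queue))                             # 2
```

The (h + w)/4 radius tolerates the small-range camera shake mentioned in the beneficial effects while still keeping distinct lamps apart.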
The beneficial effects of the invention are: (1) the problems of small-range camera shake and position offset are effectively handled, the false alarm rate of video-based signal lamp fault detection is reduced, and alarm accuracy is improved; (2) signal lamp position and state recognition is realized with a deep learning technique based on a convolutional neural network, and recognition accuracy is further improved by an image processing algorithm based on the signal lamp period; (3) signal lamp position matching and type calibration are performed according to the positions detected within the signal period of the intersection traffic signal controller, and signal lamp faults can be detected and diagnosed, so that traffic facility maintenance personnel can discover and repair faults in time.
Drawings
Fig. 1 is a flow chart of a traffic light fault detection method integrating deep learning and image processing according to the first embodiment.
Fig. 2 is a diagram showing configuration information of a traffic signal lamp group according to the first embodiment.
Fig. 3 is a configuration information diagram of a traffic signal according to the first embodiment.
Fig. 4 is an image processing schematic diagram of a traffic signal according to the first embodiment.
Fig. 5 is a schematic diagram of statistics of a traffic light lamp group according to the first embodiment.
Fig. 6 is a flow chart of an alarm for traffic light lamp group matching according to the first embodiment.
Detailed Description
The technical scheme of the invention is further specifically described below through examples and with reference to the accompanying drawings.
Embodiment one: a traffic signal lamp fault detection method integrating deep learning and image processing is shown in fig. 1, and comprises the following steps:
the method comprises the steps of (1-1) obtaining electronic police video streams under the conditions of sunny days, cloudy days and rainy days, and decoding the electronic police video streams to obtain traffic signal lamp image information, the position information of a traffic signal lamp group in the image, the type information and the position information of each signal lamp in the traffic signal lamp group;
(1-2) configuring traffic signal light group information and individual signal light information within the light group:
(1-2-1) configuring signal light group information:
setting a detection area and a lamp group area of a signal lamp group, and if the detected signal lamp position is in the set lamp group area, counting the type and position information of the signal lamp in the set lamp group area;
as shown in fig. 2, the area set when the traffic signal lamp group is configured by the mark of the present invention is relatively large in consideration of the jitter and offset of the video camera on site. The detection area of the left turn arrow lamp is shown as rectangular box 1. In the video image recognition, the center of the recognized signal lamp is in the rectangular frame 1, and the type and coordinate information of the signal lamp are counted in the left-turn signal lamp group 1. The detection area of the straight disc lamp is shown as a rectangular frame 2. In the video image recognition, the center of the recognized signal lamp is in the rectangular frame 2, and the type and coordinate information of the signal lamp are counted in the straight signal lamp group 2. It can be seen that there is an overlap area between the signal lamp bank wire frames which may lead to statistical information being present in both the lamp bank wire frames 1 and 2, the invention corrects this redundant information by matching.
(1-2-2) configuring Signal light information
Setting signal lamp external rectangular frames, wherein each signal lamp external rectangular frame contains position information and type information of the signal lamp, and the position information comprises two parts: (1) Center coordinates (x, y) of the signal in the traffic signal image; (2) the length h and width w of the circumscribed rectangle;
as shown in fig. 3, within rectangular frame 1 of the signal lamp group, the invention sequentially adds the type and position information of the red lamp, the yellow lamp, the green lamp and the countdown lamp. FIG. 3 shows the complete configuration information contained in one lamp group: the type, number and detection-area coordinate range (rectangular box 1) of the lamp group, together with the type and coordinate information of each lamp in the lamp group;
(1-3) identification of traffic lights
(1-3-1) selecting a YOLOV3-tiny deep convolutional network as a traffic light target detection network;
by counting signal lamp sizes at the pixel level across intersection scenes of various scales, the detection range of signal lamp sizes is determined to be 8×8 to 40×40 pixels, and 6 anchors are determined by clustering. The Darknet-19 backbone adopted by YOLOv3-tiny passes through 5 max-pooling layers, discarding a large number of simple features in the picture that a low-level neural network could otherwise extract and identify, so localization and recognition accuracy for small targets such as signal lamps is poor. Through training and testing on a large number of samples, the invention changes the scanning stride of the first Maxpool layer from 2 to 1 and enlarges the receptive field of the detection layer; at the same time, the number of shallow convolution kernels is increased, improving the localization accuracy and recognition accuracy of signal lamp detection.
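The anchor-clustering step can be sketched as below; this is a naive k-means on labeled (w, h) box sizes with squared Euclidean distance and deterministic initialization for brevity, whereas YOLO-style anchor clustering typically uses a 1 − IoU distance:

```python
def kmeans_anchors(boxes, k=6, iters=50):
    """Cluster labeled box sizes (w, h) into k anchor sizes.

    Deterministic sketch: initialize centers with the first k boxes,
    then iterate assignment/update steps.
    """
    centers = [tuple(b) for b in boxes[:k]]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in boxes:
            i = min(range(k),
                    key=lambda c: (w - centers[c][0]) ** 2 + (h - centers[c][1]) ** 2)
            clusters[i].append((w, h))
        for i, cl in enumerate(clusters):
            if cl:  # keep the old center if a cluster went empty
                centers[i] = (sum(w for w, _ in cl) / len(cl),
                              sum(h for _, h in cl) / len(cl))
    return sorted(centers)
```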
(1-3-2) manufacturing a training sample of the traffic signal lamp, sending the training sample into a traffic signal lamp target detection network, performing regression training of a traffic signal detection model, and generating a signal lamp detection model;
acquiring an electronic police video stream, acquiring traffic signal lamp images in batches from the video stream, ensuring that the size of a single traffic signal lamp in the image is more than or equal to 8 multiplied by 8 pixels, marking the position and the type of each traffic signal lamp in the image, generating a training sample, sending the training sample into a traffic signal lamp target detection network, carrying out regression training of a traffic signal detection model, and generating a signal lamp detection model;
(1-3-3) identifying the type of individual traffic signal lights and locating the individual traffic signal lights;
converting real-time electronic police video stream into RGB image data, sending the RGB image data into a signal lamp detection model, and detecting an external rectangular frame of a traffic signal lamp in the current RGB image and the type of the signal lamp obtained by recognition by the signal lamp detection model;
(1-4) image processing identification of traffic Signal
(1-4-1) obtaining a maximum pixel image max_img and a minimum pixel image min_img in the signal lamp area by utilizing the periodicity of the signal lamp and comparing and counting the pixel value of each position of the lamp group area in one signal period, wherein the minimum pixel image records the image information when the signal lamp is turned off, and the maximum pixel image records the image information when the signal lamp is turned on as shown in fig. 4;
(1-4-2) performing subtraction operation on the signal lamp area image img in the current period and the minimum pixel image min_img in the previous period to obtain a difference image diff_img;
(1-4-3) in each lamp group image, the position of a lit lamp changes most intensely while the remaining positions change gently; therefore a suitable threshold value thresh is set and the difference image is binarized against it, retaining the pixels at intensely changing positions and setting all gently changing pixels to 0;
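Steps (1-4-1) to (1-4-3) can be sketched with NumPy as follows; the threshold rule mean + α·dev is taken from the claims, while the function names are illustrative:

```python
import numpy as np

def update_min_max(min_img, max_img, frame):
    # Per-pixel running extrema over one signal period:
    # min_img converges to the lamp-off appearance,
    # max_img to the lamp-on appearance.
    return np.minimum(min_img, frame), np.maximum(max_img, frame)

def binarize_diff(img, min_img, alpha=2.0):
    # diff_img highlights currently lit lamps; the threshold is
    # mean + alpha * std of the difference image, alpha in [0, 4].
    diff = img.astype(np.int16) - min_img.astype(np.int16)
    diff = np.clip(diff, 0, 255).astype(np.uint8)
    thresh = diff.mean() + alpha * diff.std()
    return np.where(diff >= thresh, 255, 0).astype(np.uint8)
```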
(1-5) counting the state of a traffic signal lamp group according to the signal lamp period;
the traffic signal lamp group comprises a red lamp, a yellow lamp, a green lamp and a countdown lamp; the red light indicates stop, the green light indicates proceed, and the yellow light indicates warning;
counting the red lights, green lights, yellow lights and countdown lights that appear within a signal lamp period, taking the period length of one complete signal lamp period of the current intersection as the reference;
if the counted red light in the signal lamp period is not on, the red light is in failure;
if the counted green light in the signal lamp period is not on, the green light is in failure;
if the yellow lamp counted in the signal lamp period is not on, the yellow lamp is in failure;
if the counted countdown lamp in the signal lamp period is not on, the countdown lamp is out of order; if the counted red light, green light, yellow light and countdown light in the signal lamp period are not on, the lamp is in full failure;
if the counted time t for which the red lamp and the yellow lamp are simultaneously lit within the signal lamp period satisfies t > T, where T is a set time threshold, a red-and-yellow simultaneous-lighting fault is judged;
if the counted time t for which the red light and the green light are simultaneously lit within the signal lamp period satisfies t > T, a red-and-green simultaneous-lighting fault is judged;
and if the counted time t for which the yellow lamp and the green lamp are simultaneously lit within the signal lamp period satisfies t > T, a yellow-and-green simultaneous-lighting fault is judged.
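A minimal sketch of this per-period fault accounting, assuming the per-lamp on-times and pairwise overlap times have already been accumulated from the detection results (the names and the threshold value are illustrative):

```python
T_MAX = 3.0  # hypothetical simultaneous-on threshold T, in seconds

def cycle_faults(on_time, overlap_time):
    """on_time: seconds each lamp was lit during one full cycle,
    e.g. {"red": 30, "green": 25, "yellow": 3, "countdown": 58};
    overlap_time: seconds two lamps were lit together,
    e.g. {("red", "yellow"): 0.0}."""
    faults = [f"{lamp} not lit" for lamp, t in on_time.items() if t == 0]
    if len(faults) == len(on_time):
        faults = ["all lamps off"]   # full failure of the lamp group
    for pair, t in overlap_time.items():
        if t > T_MAX:
            faults.append(f"{pair[0]}+{pair[1]} simultaneously on")
    return faults
```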
As shown in fig. 5, the rectangular frames normally detected by the traffic signal lamp detection model are projected onto a two-dimensional plane and arranged according to the image positions of the signal lamps; the distribution of statistical centers formed by the periodic statistics of each traffic signal lamp's position follows a normal distribution.
The traffic signal lamp statistics principle is as follows:
recording a first signal lamp entering a statistics queue;
and calculating the distance d between the center of the signal lamp A entering the statistical queue and the center of the signal lamp existing in the statistical queue, and if d > (h+w)/4, adding the position information of the signal lamp A into the queue.
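The queue rule above can be sketched as follows (a hypothetical helper; w and h are the configured rectangle sizes, and a new entry is added only when it is farther than (h + w)/4 from every existing center):

```python
def add_to_queue(queue, lamp, w, h):
    """Append lamp center (x, y) to the statistics queue unless it lies
    within (h + w)/4 of an existing entry, i.e. it is a repeat
    detection of the same physical lamp."""
    for qx, qy in queue:
        d = ((lamp[0] - qx) ** 2 + (lamp[1] - qy) ** 2) ** 0.5
        if d <= (h + w) / 4:
            return False
    queue.append(lamp)
    return True
```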
As shown in fig. 6, in the whole matching process, only when one group of traffic signal lamp statistical results completely matches the configuration information can the traffic signal lamp be judged normal.
Determining a matching reference point in the traffic signal lamp configuration information:
in traffic light detection, extensive video verification shows that the green-light detection error rate is very low; therefore, as long as a green light is detected it can be used as the matching reference point. If the traffic signal lamp configuration information contains no green light, the whole lamp group must match completely to be judged normal.
(1-6) detecting a detection area in a traffic signal image by using a signal lamp detection model, and matching the detected position information and category information of each signal lamp with signal lamp configuration information in each signal lamp group (namely, the detected signal lamp center coordinates are positioned in a signal lamp circumscribed rectangle frame, and the category information is the same as the category information of the signal lamp circumscribed rectangle); and (5) judging the faults of the traffic signal lamps for the signal lamps which are not successfully matched.
(1-6-1) if the category information in the traffic signal lamp group configuration information has a green light, entering the following matching and fault judging process:
(1-6-1-1) counting the position information of the green light by taking a signal lamp period as a complete counting time;
(1-6-1-2) when the green light position information does not exist in the statistical information, the green light abnormal alarm enters an alarm queue;
(1-6-1-3) when the position information of the green light exists in the statistical information, the position information (x_g, y_g, w, h) of the circumscribed rectangle of the green light and the position information (x_r, y_r, w, h) of the circumscribed rectangle of the red light are obtained from the signal lamp configuration information, and the distances d_x1, d_x2 or d_y1, d_y2 between the circumscribed rectangular frames of the red light and the green light in the configuration information are calculated;
for a horizontal lamp group, d_x1 = |x_g - x_r|, d_x2 = |y_g - y_r|;
for a vertical lamp group, d_y1 = |y_g - y_r|, d_y2 = |x_g - x_r|;
when the green light position (x'_g, y'_g, w, h) exists in the statistical information, the red light position (x'_r, y'_r, w, h) is calculated using d_x1, d_x2 or d_y1, d_y2;
for a horizontal lamp group, x'_r = x'_g - d_x1, y'_r = y'_g - d_x2;
for a vertical lamp group, y'_r = y'_g - d_y1, x'_r = x'_g - d_y2;
if the counted red light position information (x_1, y_1, w, h) and the calculated red light position (x'_r, y'_r, w, h) do not match, the red light abnormal alarm enters the alarm queue;
the matching conditions are as follows:
if (x'_r - w/2) < x_1 < (x'_r + w/2) and |y'_r - y_1| < h/2, it is considered a horizontal match;
if (y'_r - h/2) < y_1 < (y'_r + h/2) and |x'_r - x_1| < w/2, it is considered a vertical match;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group must satisfy the vertical match;
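Using the green light as reference point, the red-light prediction and match test can be sketched as below (note that the patent's formulas subtract the configured offset from the observed green center, which presumes a fixed lamp ordering; these helper names are illustrative):

```python
def predict_red(green_cfg, red_cfg, green_obs):
    """Predict the expected red-light center from the observed green
    center, using the configured green/red rectangle offsets.
    All arguments are (x, y) center coordinates."""
    dx = abs(green_cfg[0] - red_cfg[0])   # d_x1 (or d_y2)
    dy = abs(green_cfg[1] - red_cfg[1])   # d_x2 (or d_y1)
    return green_obs[0] - dx, green_obs[1] - dy

def matches(pred, obs, w, h, horizontal=True):
    # Horizontal groups use the horizontal-match test, vertical groups
    # the vertical one, per the patent's matching conditions.
    if horizontal:
        return pred[0] - w / 2 < obs[0] < pred[0] + w / 2 \
            and abs(pred[1] - obs[1]) < h / 2
    return pred[1] - h / 2 < obs[1] < pred[1] + h / 2 \
        and abs(pred[0] - obs[0]) < w / 2
```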
(1-6-1-4) when the position information of the green light exists in the statistical information, the position information (x_g, y_g, w, h) of the circumscribed rectangle of the green light and the position information (x_y, y_y, w, h) of the circumscribed rectangle of the yellow light are obtained from the signal lamp configuration information, and the distances d_x3, d_x4 or d_y3, d_y4 between the circumscribed rectangular frames of the yellow light and the green light in the configuration information are calculated;
for a horizontal lamp group, d_x3 = |x_g - x_y|, d_x4 = |y_g - y_y|;
for a vertical lamp group, d_y3 = |y_g - y_y|, d_y4 = |x_g - x_y|;
after the green light position (x'_g, y'_g, w, h) in the statistical information is obtained, the yellow light position (x'_y, y'_y, w, h) is calculated using d_x3, d_x4 or d_y3, d_y4;
for a horizontal lamp group, x'_y = x'_g - d_x3, y'_y = y'_g - d_x4;
for a vertical lamp group, y'_y = y'_g - d_y3, x'_y = x'_g - d_y4;
if the counted yellow light position information (x_2, y_2, w, h) and the calculated yellow light position (x'_y, y'_y, w, h) do not match, the yellow light abnormal alarm enters the alarm queue;
the matching conditions are as follows:
if (x'_y - w/2) < x_2 < (x'_y + w/2) and |y'_y - y_2| < h/2, it is considered a horizontal match;
if (y'_y - h/2) < y_2 < (y'_y + h/2) and |x'_y - x_2| < w/2, it is considered a vertical match;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group must satisfy the vertical match;
(1-6-1-5) when the position information of the green light exists in the statistical information, the position information (x_g, y_g, w, h) of the circumscribed rectangle of the green light and the position information (x_c, y_c, w, h) of the circumscribed rectangle of the countdown lamp are obtained from the signal lamp configuration information, and the distances d_x5, d_x6 or d_y5, d_y6 between the circumscribed rectangular frames of the countdown lamp and the green light in the configuration information are calculated;
for a horizontal lamp group, d_x5 = |x_g - x_c|, d_x6 = |y_g - y_c|;
for a vertical lamp group, d_y5 = |y_g - y_c|, d_y6 = |x_g - x_c|;
after the green light position (x'_g, y'_g, w, h) in the statistical information is obtained, the countdown lamp position (x'_c, y'_c, w, h) is calculated using d_x5, d_x6 or d_y5, d_y6;
for a horizontal lamp group, x'_c = x'_g - d_x5, y'_c = y'_g - d_x6;
for a vertical lamp group, y'_c = y'_g - d_y5, x'_c = x'_g - d_y6;
if the counted countdown lamp position information (x_3, y_3, w, h) and the calculated countdown lamp position (x'_c, y'_c, w, h) do not match, the countdown lamp abnormal alarm enters the alarm queue;
the matching conditions are as follows:
if (x'_c - w/2) < x_3 < (x'_c + w/2) and |y'_c - y_3| < h/2, it is considered a horizontal match;
if (y'_c - h/2) < y_3 < (y'_c + h/2) and |x'_c - x_3| < w/2, it is considered a vertical match;
if the green light, the red light, the yellow light and the countdown light are all matched, the alarm queue is cleared; once matching is complete, the traffic signal lamp is normal;
(1-6-2) if the green light does not exist in the category information in the traffic signal lamp group configuration information, entering the following matching and fault judging process:
among the signal lamp positions and states counted within the signal period, if the positions of one group of lamps completely match the traffic signal lamp group configuration information, that signal lamp group is normal; otherwise, the signal lamp group is judged abnormal.
The above-described embodiment is only a preferred embodiment of the present invention and is not limiting in any way; other variations and modifications may be made without departing from the technical solutions set forth in the claims.
Claims (7)
1. The traffic signal lamp fault detection method integrating deep learning and image processing is characterized by comprising the following steps of:
step S1: obtaining electronic police video streams under various weather conditions, and decoding the electronic police video streams to obtain information comprising traffic signal lamp images, the position of a traffic signal lamp group in the images, the type information and the position information of each signal lamp in the traffic signal lamp group;
step S2: configuring traffic signal lamp group information and signal lamp information in the lamp group;
step S3: establishing a detection model of the traffic signal lamp;
step S4: carrying out image processing and identification on the traffic signal lamp;
step S5: counting the state of a traffic signal lamp group according to the signal lamp period;
step S6: detecting a detection area in a traffic signal lamp image by using a signal lamp detection model, and if the detection model fails to detect a signal lamp, detecting the position and the state of the signal lamp in the image by adopting an image processing method for the lamp group area; matching the detected position information and category information of each signal lamp with signal lamp information configured in each signal lamp group; making fault judgment of traffic signal lamps for the signal lamps which are not successfully matched;
the step S4 includes the steps of:
step S41: counting the maximum pixel image max_img and the minimum pixel image min_img in a signal lamp area in one period by utilizing the periodicity of the signal lamp;
step S42: subtracting the minimum pixel image min_img in the previous period from the signal lamp area image img in the current period to obtain a difference image diff_img;
step S43: setting a proper threshold value thresh, and performing image binarization operation on the difference image according to the threshold value to obtain the position information of the signal lamp;
step S44: matching the position information of the signal lamp with the configured position information of the signal lamp to obtain category information of the signal lamp;
the method for obtaining the maximum pixel image max_img and the minimum pixel image min_img in step S41 is as follows: starting from the current signal period T1, each signal period time is T, and the lamp group area image img (T) in each subsequent frame of RGB image is compared with the first frame of image by taking the lamp group area image img_1 in the first frame of RGB image as a reference, wherein the value range of T is [0, T ], and different T represents the lamp group area images img (T) corresponding to different moments;
let the minimum pixel image of the lamp group area be min_img,
and the maximum pixel image of the lamp group area be max_img;
initialize: min_img = img(0)
max_img = img(0)
within one signal period T, when
min_img[i, j] > img(t)[i, j], set min_img[i, j] = img(t)[i, j];
when max_img[i, j] < img(t)[i, j], set max_img[i, j] = img(t)[i, j];
wherein [i, j] are the coordinates of a pixel in the image, and t ranges over [0, T];
when t = T, the minimum pixel image min_img and the maximum pixel image max_img of the T1 period are obtained;
the method for obtaining the position information of the signal lamp in step S43 is as follows: for diff_img(t) obtained from step S42, a corresponding threshold value thresh is set:
if diff_img(t)[i, j] >= thresh,
then diff_img(t)[i, j] = 255;
otherwise, if diff_img(t)[i, j] < thresh,
then diff_img(t)[i, j] = 0;
obtaining a binary image through the difference image, and carrying out contour detection and minimum circumscribed rectangle acquisition on a white pixel block in the binary image; setting the center of the obtained minimum circumscribed rectangle as (xi, yi), the length as hi and the width as wi;
in the information of the signal lamp which is arranged and configured,
the position information of the circumscribed rectangle of the red light is (xr, yr, w, h);
the position information of the circumscribed rectangle of the green light is (xg, yg, w, h);
the position information of the circumscribed rectangle of the yellow lamp is (xy, yy, w, h);
the position information of the circumscribed rectangle of the countdown lamp is (xc, yc, w, h);
if xr-w/2< xi < xr+w/2, yr-h/2< yi < yr+h/2,
the signal lamp identification result in the figure is a red lamp;
if xg-w/2< xi < xg+w/2, yg-h/2< yi < yg+h/2,
in the figure, the signal lamp identification result is green light;
if xy-w/2< xi < xy+w/2, yy-h/2< yi < yy+h/2,
the signal lamp identification result in the figure is yellow lamp;
if xc-w/2< xi < xc+w/2, yc-h/2< yi < yc+h/2, the signal lamp identification result in the figure is a countdown lamp;
the threshold thresh is calculated as follows:
thresh=mean+α*dev,
wherein mean is the mean value of diff_img(t), dev is the standard deviation of diff_img(t), and alpha is a coefficient with value range [0, 4].
2. The method for detecting a traffic light fault by combining deep learning with image processing according to claim 1, wherein the step S2 comprises the steps of:
step S21: configuring signal lamp group information; setting a detection area and a lamp group area of a signal lamp group, and if the detected signal lamp position is in the set lamp group area, counting the type and position information of the signal lamp in the set lamp group area;
step S22: configuring signal lamp information; and setting signal lamp external rectangular frames, wherein each signal lamp external rectangular frame contains position information and type information of the signal lamp.
3. The traffic light fault detection method of deep learning and image processing fusion according to claim 1 or 2, wherein the step S3 comprises the steps of:
step S31: selecting a YOLOV3-tiny deep convolution network as a traffic signal lamp target detection network;
step S32: manufacturing a training sample of the traffic signal lamp, sending the training sample into a traffic signal lamp target detection network, and performing classification regression training of a traffic signal detection model to generate the signal lamp detection model;
step S33: the type of individual traffic signal light is identified and the individual traffic signal light is located.
4. The method for detecting a traffic light fault by combining deep learning with image processing according to claim 1, wherein the step S6 comprises the steps of:
step S61: if the category information in the traffic signal lamp group configuration information has green light, entering the following matching and fault judging process:
step a: taking a signal lamp period as a complete statistical time, and counting the position information of the green lamp;
step b: when the statistical information does not contain green light position information, the green light abnormal alarm enters an alarm queue;
step c: when the statistical information includes the position information of the green light, the position information (x_g, y_g, w, h) of the circumscribed rectangle of the green light and the position information (x_r, y_r, w, h) of the circumscribed rectangle of the red light are obtained from the signal lamp configuration information, and the distances d_x1, d_x2 or d_y1, d_y2 between the circumscribed rectangular frames of the red light and the green light in the configuration information are calculated;
for a horizontal lamp group, d_x1 = |x_g - x_r|, d_x2 = |y_g - y_r|;
for a vertical lamp group, d_y1 = |y_g - y_r|, d_y2 = |x_g - x_r|;
when the green light position (x'_g, y'_g, w, h) exists in the statistical information, the red light position (x'_r, y'_r, w, h) is calculated using d_x1, d_x2 or d_y1, d_y2;
for a horizontal lamp group, x'_r = x'_g - d_x1, y'_r = y'_g - d_x2;
for a vertical lamp group, y'_r = y'_g - d_y1, x'_r = x'_g - d_y2;
if the counted red light position information (x_1, y_1, w, h) and the calculated red light position (x'_r, y'_r, w, h) do not match, the red light abnormal alarm enters the alarm queue;
the matching conditions are as follows:
if (x'_r - w/2) < x_1 < (x'_r + w/2) and |y'_r - y_1| < h/2, it is considered a horizontal match;
if (y'_r - h/2) < y_1 < (y'_r + h/2) and |x'_r - x_1| < w/2, it is considered a vertical match;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group must satisfy the vertical match;
step d: when the statistical information includes the position information of the green light, the position information (x_g, y_g, w, h) of the circumscribed rectangle of the green light and the position information (x_y, y_y, w, h) of the circumscribed rectangle of the yellow light are obtained from the signal lamp configuration information, and the distances d_x3, d_x4 or d_y3, d_y4 between the circumscribed rectangular frames of the yellow light and the green light in the configuration information are calculated;
for a horizontal lamp group, d_x3 = |x_g - x_y|, d_x4 = |y_g - y_y|;
for a vertical lamp group, d_y3 = |y_g - y_y|, d_y4 = |x_g - x_y|;
after the green light position (x'_g, y'_g, w, h) in the statistical information is obtained, the yellow light position (x'_y, y'_y, w, h) is calculated using d_x3, d_x4 or d_y3, d_y4;
for a horizontal lamp group, x'_y = x'_g - d_x3, y'_y = y'_g - d_x4;
for a vertical lamp group, y'_y = y'_g - d_y3, x'_y = x'_g - d_y4;
if the counted yellow light position information (x_2, y_2, w, h) and the calculated yellow light position (x'_y, y'_y, w, h) do not match, the yellow light abnormal alarm enters the alarm queue;
the matching conditions are as follows:
if (x'_y - w/2) < x_2 < (x'_y + w/2) and |y'_y - y_2| < h/2, it is considered a horizontal match;
if (y'_y - h/2) < y_2 < (y'_y + h/2) and |x'_y - x_2| < w/2, it is considered a vertical match;
a horizontal lamp group must satisfy the horizontal match, and a vertical lamp group must satisfy the vertical match;
step e: when the statistical information includes the position information of the green light, the position information (x_g, y_g, w, h) of the circumscribed rectangle of the green light and the position information (x_c, y_c, w, h) of the circumscribed rectangle of the countdown lamp are obtained from the signal lamp configuration information, and the distances d_x5, d_x6 or d_y5, d_y6 between the circumscribed rectangular frames of the countdown lamp and the green light in the configuration information are calculated;
for a horizontal lamp group, d_x5 = |x_g - x_c|, d_x6 = |y_g - y_c|;
for a vertical lamp group, d_y5 = |y_g - y_c|, d_y6 = |x_g - x_c|;
after the green light position (x'_g, y'_g, w, h) in the statistical information is obtained, the countdown lamp position (x'_c, y'_c, w, h) is calculated using d_x5, d_x6 or d_y5, d_y6;
for a horizontal lamp group, x'_c = x'_g - d_x5, y'_c = y'_g - d_x6;
for a vertical lamp group, y'_c = y'_g - d_y5, x'_c = x'_g - d_y6;
if the counted countdown lamp position information (x_3, y_3, w, h) and the calculated countdown lamp position (x'_c, y'_c, w, h) do not match, the countdown lamp abnormal alarm enters the alarm queue;
the matching conditions are as follows:
if (x'_c - w/2) < x_3 < (x'_c + w/2) and |y'_c - y_3| < h/2, it is considered a horizontal match;
if (y'_c - h/2) < y_3 < (y'_c + h/2) and |x'_c - x_3| < w/2, it is considered a vertical match;
if the green light, the red light, the yellow light and the countdown light are all matched, the alarm queue is cleared; once matching is complete, the traffic signal lamp is normal;
step S62: if the category information in the traffic signal lamp group configuration information does not have a green light, entering the following matching and fault judging process:
among the signal lamp positions and states counted within the signal period, if the positions of one group of lamps completely match the traffic signal lamp group configuration information, that signal lamp group is normal; otherwise, the signal lamp group is judged abnormal.
5. The method for detecting a traffic light fault by combining deep learning with image processing according to claim 3, wherein the method for identifying the type of the single traffic light in step S33 is as follows: and converting the real-time electronic police video stream into RGB image data, sending the RGB image data into a signal lamp detection model, and detecting an external rectangular frame of a traffic signal lamp in the current RGB image and the type of the signal lamp obtained by recognition by the signal lamp detection model.
6. The method for detecting a traffic light fault by combining deep learning with image processing according to claim 1, wherein the method for acquiring the difference image diff_img in step S42 is as follows: the image in the light group area of each frame of RGB image in the current period T2 is img (T), the light group area minimum pixel image min_img obtained in the previous period T1 is obtained through subtraction operation, and a difference image diff_img (T) =img (T) -min_img is obtained.
7. The method for detecting a traffic signal fault by combining deep learning with image processing according to claim 1 or 2, wherein the method for acquiring the state of one traffic signal lamp group in the signal lamp period in step S5 is as follows: the traffic signal lamp group comprises a red lamp, a yellow lamp, a green lamp and a countdown lamp; the red light indicates stop, the green light indicates proceed, and the yellow light indicates warning; counting the red lights, green lights, yellow lights and countdown lights that appear within a signal lamp period, taking the period length of one complete signal lamp period of the current intersection as the reference;
if the counted red light in the signal lamp period is not on, the red light is in failure;
if the counted green light in the signal lamp period is not on, the green light is in failure;
if the yellow lamp counted in the signal lamp period is not on, the yellow lamp is in failure;
if the counted countdown lamp in the signal lamp period is not on, the countdown lamp is out of order; if the counted red light, green light, yellow light and countdown light in the signal lamp period are not on, the lamp is in full failure;
if the counted time t for which the red lamp and the yellow lamp are simultaneously lit within the signal lamp period satisfies t > T, where T is a set time threshold, a red-and-yellow simultaneous-lighting fault is judged;
if the counted time t for which the red light and the green light are simultaneously lit within the signal lamp period satisfies t > T, a red-and-green simultaneous-lighting fault is judged;
and if the counted time t for which the yellow lamp and the green lamp are simultaneously lit within the signal lamp period satisfies t > T, a yellow-and-green simultaneous-lighting fault is judged.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202010865255.XA CN112149509B (en) | 2020-08-25 | 2020-08-25 | Traffic signal lamp fault detection method integrating deep learning and image processing |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112149509A CN112149509A (en) | 2020-12-29 |
CN112149509B true CN112149509B (en) | 2023-05-09 |
Family
ID=73888944
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113129591B (en) * | 2021-04-13 | 2022-07-08 | 江苏智通交通科技有限公司 | Traffic signal lamp fault detection method based on deep learning target detection |
CN113194589B (en) * | 2021-04-22 | 2021-12-28 | 九州云(北京)科技发展有限公司 | Airport navigation aid light single lamp fault monitoring method based on video analysis |
CN115083195A (en) * | 2022-06-09 | 2022-09-20 | 成都华凯达交通设施有限公司 | Intelligent monitoring and analyzing system for intelligent traffic facility faults based on digitization |
CN114821451B (en) * | 2022-06-28 | 2022-09-20 | 南开大学 | Offline target detection method and system for traffic signal lamp video |
CN117475411B (en) * | 2023-12-27 | 2024-03-26 | 安徽蔚来智驾科技有限公司 | Signal lamp countdown identification method, computer readable storage medium and intelligent device |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6320981B1 (en) * | 1997-08-28 | 2001-11-20 | Fuji Xerox Co., Ltd. | Image processing system and image processing method |
CN101727573A (en) * | 2008-10-13 | 2010-06-09 | 汉王科技股份有限公司 | Method and device for estimating crowd density in video image |
CN103886344A (en) * | 2014-04-14 | 2014-06-25 | 西安科技大学 | Image type fire flame identification method |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6987882B2 (en) * | 2002-07-01 | 2006-01-17 | Xerox Corporation | Separation system for Multiple Raster Content (MRC) representation of documents |
CN109636777A (en) * | 2018-11-20 | 2019-04-16 | 广州方纬智慧大脑研究开发有限公司 | A kind of fault detection method of traffic lights, system and storage medium |
CN111275696B (en) * | 2020-02-10 | 2023-09-15 | 腾讯医疗健康(深圳)有限公司 | Medical image processing method, image processing method and device |
CN111428647B (en) * | 2020-03-25 | 2023-07-07 | 浙江中控信息产业股份有限公司 | Traffic signal lamp fault detection method |
Also Published As
Publication number | Publication date |
---|---|
CN112149509A (en) | 2020-12-29 |
Similar Documents
Publication | Title |
---|---|
CN112149509B (en) | Traffic signal lamp fault detection method integrating deep learning and image processing |
CN111428647B (en) | Traffic signal lamp fault detection method |
CN110197589B (en) | Deep learning-based red light violation detection method |
US20180060986A1 (en) | Information processing device, road structure management system, and road structure management method |
CN109636777A (en) | Fault detection method, system and storage medium for traffic signal lamps |
CN106504580A (en) | Parking space detection method and device |
CN110782692A (en) | Signal lamp fault detection method and system |
CN111950536A (en) | Signal lamp image processing method and device, computer system and roadside equipment |
CN111753612A (en) | Method and device for detecting sprinkled objects, and storage medium |
CN113903008A (en) | Ramp exit vehicle violation identification method based on deep learning and trajectory tracking |
CN109740412A (en) | Signal lamp failure detection method based on computer vision |
CN104134350A (en) | Intelligent dome camera system for traffic violation behavior recognition |
CN114627435A (en) | Intelligent light adjusting method, device, equipment and medium based on image recognition |
CN112084892A (en) | Road abnormal event detection and management device and method |
Yin et al. | Promoting Automatic Detection of Road Damage: A High-Resolution Dataset, a New Approach, and a New Evaluation Criterion |
CN116958764A (en) | Method for detecting data of mounted equipment based on deep learning |
JP2004086417A (en) | Method and device for detecting pedestrians on zebra crossings |
CN111339834B (en) | Method for identifying vehicle driving direction, computer device and storage medium |
TWI743637B (en) | Traffic light recognition system and method thereof |
CN110070724A (en) | Video monitoring method, device, camera and image information supervision system |
CN110826456A (en) | Countdown board fault detection method and system |
CN112528944A (en) | Image recognition method and device, electronic equipment and storage medium |
CN111681442A (en) | Signal lamp fault detection device based on image classification algorithm |
CN111291722A (en) | Vehicle re-identification system based on V2I technology |
Rachman et al. | Camera Self-Calibration: Deep Learning from Driving Scenes |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: 310053 23-25, 2 building, 352 BINKANG Road, Binjiang District, Hangzhou, Zhejiang. Applicant after: Zhejiang zhongkong Information Industry Co.,Ltd. Address before: 310053 23-25, 2 building, 352 BINKANG Road, Binjiang District, Hangzhou, Zhejiang. Applicant before: ZHEJIANG SUPCON INFORMATION TECHNOLOGY Co.,Ltd. |
| GR01 | Patent grant | |