CN109389079B - Traffic signal lamp identification method - Google Patents

Traffic signal lamp identification method

Info

Publication number
CN109389079B
CN109389079B (application CN201811158223.5A)
Authority
CN
China
Prior art keywords
traffic signal
target
signal lamp
traffic
undetermined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201811158223.5A
Other languages
Chinese (zh)
Other versions
CN109389079A (en)
Inventor
王莹
曹亮
张美娟
丁鹏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuxi Institute of Technology
Original Assignee
Wuxi Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuxi Institute of Technology filed Critical Wuxi Institute of Technology
Priority to CN201811158223.5A priority Critical patent/CN109389079B/en
Publication of CN109389079A publication Critical patent/CN109389079A/en
Application granted granted Critical
Publication of CN109389079B publication Critical patent/CN109389079B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/28 Quantising the image, e.g. histogram thresholding for discrimination between background and foreground patterns
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/30 Noise filtering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/422 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation for representing the structure of the pattern or shape of an object therefor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/09 Recognition of logos

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a traffic signal lamp identification method, and belongs to the field of image processing. The method comprises: obtaining a traffic scene image; detecting the traffic scene image with a pre-trained model to obtain information of N undetermined traffic signal lamps; selecting M target traffic signal lamps from the N undetermined traffic signal lamps; for each target traffic signal lamp, acquiring the maximum connected region of the target traffic signal lamp in the traffic scene image and the binarized image corresponding to the maximum connected region in order to determine the color of the target traffic signal lamp; determining the graph of the target traffic signal lamp according to the binarized image corresponding to the maximum connected region and binarized template images; and determining the state information of the target traffic signal lamp according to the color and the graph of the target traffic signal lamp. The method solves the problems that existing traffic signal lamp state identification methods are complex and struggle to meet real-time requirements, and achieves elimination of interference when identifying a single traffic signal lamp, good real-time performance and simple application.

Description

Traffic signal lamp identification method
Technical Field
The embodiment of the invention relates to the field of image processing, in particular to a traffic signal lamp identification method.
Background
With the development of science and technology, the automatic driving technology is becoming more practical. In order to ensure the safety of automatic driving, accurately identifying the traffic signal lamp is an important function that needs to be achieved by an automatic driving system.
In the related art, traffic signal lamps are mostly recognized with conventional image processing methods, for example, by segmenting and matching color or brightness according to the inherent characteristics of the traffic signal lamp to obtain its state information. However, the urban environment in which traffic signal lamps are located is complex; weather, light, obstacles, pedestrians and vehicles all interfere with the recognition result, so the accuracy of the recognized traffic signal lamp state information is not high.
Disclosure of Invention
In order to solve the problems in the prior art, the embodiment of the invention provides a traffic signal lamp identification method. The technical scheme is as follows:
in a first aspect, a traffic signal light identification method is provided, which includes:
acquiring a traffic scene image;
detecting the traffic scene image by using a pre-training model to obtain N pieces of information of undetermined traffic signal lamps; the pre-training model is obtained by training a YOLO V3 model based on a COCO database, and information of one undetermined traffic signal lamp corresponds to one undetermined traffic signal lamp;
according to the information of the undetermined traffic signal lamps and preset position parameters, M target traffic signal lamps are selected from the N undetermined traffic signal lamps; n and M are integers, and N is greater than M;
aiming at each target traffic signal lamp, acquiring a maximum connected region of the target traffic signal lamp in a traffic scene image and a binary image corresponding to the maximum connected region;
taking the color with the largest number of pixel points in the largest connected region as the color of the target traffic signal lamp;
determining the graph of the target traffic signal lamp according to the binarized image corresponding to the maximum connected region and binarized template images, wherein the binarized template images comprise a circular lamp, a left-turn arrow, a right-turn arrow and a straight arrow;
determining the state information of the target traffic signal lamp according to the color and the graph of the target traffic signal lamp;
the undetermined traffic signal lamp information comprises the abscissa and ordinate of the upper-left vertex of the target area of the undetermined traffic signal lamp in the traffic scene image, the width and height of the target area, and the transverse proportion and longitudinal proportion of the target area in the traffic scene image.
Optionally, selecting M target traffic signal lamps from the N undetermined traffic signal lamps according to the undetermined traffic signal lamp information and preset position parameters includes:
acquiring P undetermined traffic signal lamps from the N undetermined traffic signal lamps, wherein the transverse proportion of the P undetermined traffic signal lamps is in a preset transverse proportion range, and the longitudinal proportion of the P undetermined traffic signal lamps is in a preset longitudinal proportion range; p is an integer, and P is less than N;
sorting the P undetermined traffic signal lamps in descending order of the area of their target areas in the traffic scene image;
and taking the first M undetermined traffic signal lamps as the target traffic signal lamps.
Optionally, obtaining a maximum connected region of the target traffic signal lamp in the traffic scene image and a binarized image corresponding to the maximum connected region includes:
acquiring a binary image of a target area of a target traffic signal lamp in a traffic scene image;
removing noise points in the binarized image of the target area;
and acquiring the maximum connected region and the binarized image corresponding to the maximum connected region from the binarized image of the target region.
Optionally, taking the color with the largest number of pixel points in the maximum connected region as the color of the target traffic signal lamp includes:
judging the color of each pixel point in the maximum connected region according to the following formula:
[Formula I is reproduced in the original publication as an image; it assigns z_i(x,y) the value red, yellow or green according to the color components of the pixel at (x, y).]
counting the number of pixel points corresponding to each color;
detecting whether the ratio of the maximum number of pixel points to the total number of pixel points in the maximum connected region is greater than a predetermined value;
if the ratio of the maximum number of pixel points to the total number of pixel points in the maximum connected region is greater than the predetermined value, taking the color of the pixel points with the maximum number as the color of the target traffic signal lamp;
where z_i(x,y) represents the color of the pixel point i with coordinates (x, y).
Optionally, determining the graph of the target traffic signal lamp according to the binarization graph and the binarization template image corresponding to the maximum connected region, including:
for each target traffic signal lamp, converting each binarized template image into a template image with the same size as the binarized image corresponding to the maximum connected region;
and calculating, according to the following formula, the sum of squared pixel-value differences between the binarized image corresponding to the maximum connected region and each template image:
D_j = Σ_(x,y) [bw_target(x,y) - bw_templet(x,y)]^2
taking the binarized template image corresponding to the minimum sum of squared differences as the graph of the target traffic signal lamp;
where D_j denotes the sum of squared pixel-value differences between the binarized image corresponding to the maximum connected region and the j-th binarized template image, bw_target(x,y) denotes the binarized image of the maximum connected region of the target traffic signal lamp, bw_templet(x,y) denotes the j-th binarized template image, and (x, y) denotes the coordinates of a pixel point.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
the traffic signal lamp in a traffic scene image is primarily identified by adopting a YOLO V3 frame based on a training model of a COCO data set, the result of primary identification is screened according to the regional size and the position information of the traffic signal lamp, the effective region of the traffic signal lamp is protected, the effective region containing the traffic signal lamp is secondarily identified, the color and the graphic characteristics of the traffic signal lamp are obtained, and therefore the state information of the traffic signal lamp is obtained.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flow diagram illustrating a traffic signal identification method in accordance with an exemplary embodiment;
FIG. 2 is a flow chart illustrating a traffic signal identification method according to another exemplary embodiment;
FIG. 3 is a diagram illustrating a binarized template image according to an exemplary embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
As shown in fig. 1, the method for identifying a traffic signal according to an embodiment of the present invention includes the following steps:
Step 101, acquiring a traffic scene image.
Step 102, detecting the traffic scene image by using a pre-training model to obtain information of N undetermined traffic signal lamps.
The pre-training model is a model trained based on the COCO database using the YOLO V3 model.
One piece of undetermined traffic signal lamp information corresponds to one undetermined traffic signal lamp.
The confidence of each piece of undetermined traffic signal lamp information is greater than a preset confidence; optionally, the preset confidence is 65%.
N is an integer.
The undetermined traffic signal lamp information comprises the abscissa and ordinate of the upper-left vertex of the target area of the undetermined traffic signal lamp in the traffic scene image, the width and height of that target area, and the transverse proportion and longitudinal proportion of the undetermined traffic signal lamp in the traffic scene image.
The transverse proportion of the undetermined traffic signal lamp in the traffic scene image is the ratio of the abscissa of the upper-left vertex of the target area to the width of the traffic scene image.
The longitudinal proportion of the undetermined traffic signal lamp in the traffic scene image is the ratio of the ordinate of the upper-left vertex of the target area to the height of the traffic scene image.
Step 103, selecting M target traffic signal lamps from the N undetermined traffic signal lamps according to the undetermined traffic signal lamp information and the preset position parameters.
M is less than N, and M is a positive integer.
Step 104, acquiring, for each target traffic signal lamp, the maximum connected region of the target traffic signal lamp in the traffic scene image and the binarized image corresponding to the maximum connected region.
Step 105, taking the color with the largest number of pixel points in the maximum connected region as the color of the target traffic signal lamp.
For example, the color of the target traffic signal lamp is determined to be red, yellow or green.
Step 106, determining the graph of the target traffic signal lamp according to the binarized image corresponding to the maximum connected region and the binarized template images.
The binarized template images comprise a circular lamp, a left-turn arrow, a right-turn arrow and a straight arrow.
Step 107, determining the state information of the target traffic signal lamp according to the color and the graph of the target traffic signal lamp.
For example: the color of the target traffic signal lamp is green and the graph is a circular lamp, so the state information of the target traffic signal lamp is a green circular lamp; or the color of the target traffic signal lamp is red and the graph is a left-turn arrow, so the state information of the target traffic signal lamp is a red left-turn arrow.
In summary, in the traffic signal lamp identification method provided by the embodiment of the invention, the traffic signal lamps in the traffic scene image are first identified with a model trained on the COCO data set under the YOLO V3 framework; the preliminary detections are screened according to the size and position information of the traffic signal lamp areas to obtain the effective areas containing traffic signal lamps; and each effective area is then identified a second time to obtain the color and graphic features of the traffic signal lamp, from which its state information is obtained.
Referring to fig. 2, a flow chart of a traffic signal light identification method according to another embodiment of the invention is shown. As shown in fig. 2, the traffic signal light recognition method may include the steps of:
step 201, acquiring a traffic scene image.
Step 202, detecting the traffic scene image by using the pre-training model to obtain the information of the N undetermined traffic signal lamps.
The pre-training model is a model trained based on the COCO database using the YOLO V3 model.
One piece of undetermined traffic signal lamp information corresponds to one undetermined traffic signal lamp. The confidence of each piece of undetermined traffic signal lamp information is greater than a preset confidence; optionally, the preset confidence is 65%.
N is an integer.
The undetermined traffic signal lamp information comprises the abscissa and ordinate of the upper-left vertex of the target area of the undetermined traffic signal lamp in the traffic scene image, the width and height of that target area, and the transverse proportion and longitudinal proportion of the undetermined traffic signal lamp in the traffic scene image.
The transverse proportion of the undetermined traffic signal lamp in the traffic scene image is the ratio of the abscissa of the upper-left vertex of the target area to the width of the traffic scene image.
The longitudinal proportion of the undetermined traffic signal lamp in the traffic scene image is the ratio of the ordinate of the upper-left vertex of the target area to the height of the traffic scene image.
In one example, a traffic scene image is detected by using the pre-training model, and the obtained information of the N undetermined traffic signal lamps is shown in Table 1.
Table 1
[Table 1 is reproduced in the original publication as an image; it lists left, top, w, h, lp and tp for the four undetermined traffic signal lamps.]
Here left represents the abscissa of the upper-left vertex of the target area of the undetermined traffic signal lamp in the traffic scene image; top represents the ordinate of that vertex; w represents the width of the target area; h represents the height of the target area; lp represents the transverse proportion of the target area in the traffic scene image, lp = left / width of the traffic scene image; and tp represents the longitudinal proportion, tp = top / height of the traffic scene image.
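For illustration only, the following minimal Python sketch (not part of the patent) shows how the detection output described above might be represented and how lp and tp could be computed, assuming the detector output has already been collected as (left, top, w, h, confidence) tuples for the traffic-light class of a YOLO V3 model pre-trained on the COCO data set:

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class PendingLight:
    left: int    # abscissa of the upper-left vertex of the target area
    top: int     # ordinate of the upper-left vertex of the target area
    w: int       # width of the target area
    h: int       # height of the target area
    lp: float    # transverse proportion: left / image width
    tp: float    # longitudinal proportion: top / image height

def collect_pending_lights(boxes: List[Tuple[int, int, int, int, float]],
                           img_w: int, img_h: int,
                           min_conf: float = 0.65) -> List[PendingLight]:
    # Keep detections whose confidence exceeds the preset confidence (65%)
    # and attach the transverse and longitudinal proportions.
    pending = []
    for left, top, w, h, conf in boxes:
        if conf <= min_conf:
            continue
        pending.append(PendingLight(left, top, w, h,
                                    lp=left / img_w, tp=top / img_h))
    return pending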
Step 203, acquiring P undetermined traffic signal lamps from the N undetermined traffic signal lamps, wherein the transverse proportion of the P undetermined traffic signal lamps is within a preset transverse proportion range, and the longitudinal proportion of the P undetermined traffic signal lamps is within a preset longitudinal proportion range.
P is a positive integer, and P is less than N.
Since, when the vehicle is running normally, traffic signal lamps generally appear in the upper middle part of the traffic scene image, a preset transverse proportion range and a preset longitudinal proportion range are set to eliminate targets outside the region of attention.
Optionally, the predetermined lateral ratio ranges from 0.2 to 0.8.
Optionally, the predetermined longitudinal proportion ranges from 0 to 0.65.
In one example, the preset transverse proportion range is 0.2-0.8 and the preset longitudinal proportion range is 0-0.65; 2 undetermined traffic signal lamps are selected from the 4 undetermined traffic signal lamps shown in Table 1, namely undetermined traffic signal lamp 3 and undetermined traffic signal lamp 4.
Step 204, sorting the P undetermined traffic signal lamps in descending order of the area of their target areas in the traffic scene image.
The area of a target area of the traffic signal lamp to be determined in the traffic scene image is the product of the height of the target area and the width of the target area.
In one example, for the undetermined traffic signal lamp information shown in Table 1, the area of the target area of undetermined traffic signal lamp 3 is 1560 and the area of the target area of undetermined traffic signal lamp 4 is 1701, so the descending order is undetermined traffic signal lamp 4 followed by undetermined traffic signal lamp 3.
Step 205, taking the first M undetermined traffic signal lamps as the target traffic signal lamps.
M is a positive integer. Optionally, M is 4.
When M is larger than P, all P undetermined traffic signal lamps are taken as target traffic signal lamps; when M is smaller than P, the first M of the sorted P undetermined traffic signal lamps are taken as target traffic signal lamps.
In one example, with M = 4 and only P = 2 undetermined traffic signal lamps remaining, both are taken as target traffic signal lamps, and the target traffic signal lamp information is shown in Table 2.
Table 2
[Table 2 is reproduced in the original publication as an image; it lists left, top, w and h for the two target traffic signal lamps.]
Here left represents the abscissa of the upper-left vertex of the target area of the target traffic signal lamp in the traffic scene image; top represents the ordinate of that vertex; w represents the width of the target area; and h represents the height of the target area.
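Continuing the sketch above (again an illustration, not the patent's code), steps 203 to 205 could be implemented as a filter on lp/tp, a sort by target-area size, and a cut at the first M candidates:

def select_target_lights(pending, m=4,
                         lp_range=(0.2, 0.8), tp_range=(0.0, 0.65)):
    # Step 203: keep candidates whose proportions fall in the preset ranges.
    kept = [p for p in pending
            if lp_range[0] <= p.lp <= lp_range[1]
            and tp_range[0] <= p.tp <= tp_range[1]]
    # Step 204: sort by the area of the target area, largest first.
    kept.sort(key=lambda p: p.w * p.h, reverse=True)
    # Step 205: take the first M; if M exceeds P, all candidates are kept.
    return kept[:m]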
Step 206, for each target traffic signal lamp, acquiring a binarized image of the target area of the target traffic signal lamp in the traffic scene image.
Optionally, a binarization threshold value is obtained by using a maximum inter-class variance method, and binarization is performed on a target area of the target traffic signal lamp in the traffic scene image.
Step 207, removing noise points in the binarized image of the target area.
Optionally, an opening operation is applied to remove the noise points.
Step 208, acquiring the maximum connected region and the binarized image corresponding to the maximum connected region from the binarized image of the target area.
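As an illustration of steps 206 to 208, the following sketch (an assumption, not the patent's code) uses OpenCV's Otsu thresholding, a morphological opening and connected-component analysis to obtain the largest connected region of one target traffic signal lamp:

import cv2
import numpy as np

def largest_connected_region(image_bgr, light):
    # Crop the target area of the traffic signal lamp from the scene image.
    roi = image_bgr[light.top:light.top + light.h,
                    light.left:light.left + light.w]
    gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
    # Step 206: binarize with the maximum inter-class variance (Otsu) threshold.
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    # Step 207: an opening operation removes isolated noise points.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    bw = cv2.morphologyEx(bw, cv2.MORPH_OPEN, kernel)
    # Step 208: keep only the largest connected component (label 0 is background).
    n, labels, stats, _ = cv2.connectedComponentsWithStats(bw, connectivity=8)
    if n <= 1:
        return roi, bw
    largest = 1 + int(np.argmax(stats[1:, cv2.CC_STAT_AREA]))
    mask = np.where(labels == largest, 255, 0).astype(np.uint8)
    return roi, mask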
Step 209, judging the color of each pixel point in the maximum connected region according to formula I.
[Formula I is reproduced in the original publication as an image; it assigns z_i(x,y) the value red, yellow or green according to the color components of the pixel at (x, y).]
where z_i(x,y) represents the color of the pixel point i with coordinates (x, y).
When z_i(x,y) is red, the color of the pixel point i with coordinates (x, y) is red; when z_i(x,y) is yellow, the color of the pixel point i is yellow; when z_i(x,y) is green, the color of the pixel point i is green.
Step 210, counting the number of pixel points corresponding to each color.
Step 211, detecting whether the ratio of the maximum number of the pixel points to the total number of the pixel points in the maximum connected region is greater than a predetermined value.
Optionally, the predetermined value is 0.3.
If the ratio of the maximum number of pixel points to the total number of pixel points in the maximum connected region is greater than the predetermined value, the color is valid and step 212 is executed; if the ratio is not greater than the predetermined value, the color is invalid and the state of the traffic signal lamp is determined to be off.
Step 212, taking the color with the maximum number of pixel points as the color of the target traffic signal lamp.
It should be noted that, steps 207 to 212 are performed for each target traffic signal lamp.
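The patent's exact per-pixel color formula (formula I) is published only as an image, so the hue thresholds in the sketch below are assumptions chosen for illustration; the counting and the ratio test against the predetermined value 0.3 follow the text of steps 209 to 212:

import cv2
import numpy as np

def judge_color(roi_bgr, region_mask, min_ratio=0.3):
    # Classify each pixel inside the largest connected region as red, yellow
    # or green using assumed HSV hue ranges, then keep the dominant color only
    # if its share of the region exceeds min_ratio; otherwise the lamp is off.
    hsv = cv2.cvtColor(roi_bgr, cv2.COLOR_BGR2HSV)
    hue = hsv[:, :, 0]
    inside = region_mask > 0
    counts = {
        "red": int(np.count_nonzero(inside & ((hue < 10) | (hue > 160)))),
        "yellow": int(np.count_nonzero(inside & (hue >= 20) & (hue <= 35))),
        "green": int(np.count_nonzero(inside & (hue >= 45) & (hue <= 90))),
    }
    total = int(np.count_nonzero(inside))
    color, count = max(counts.items(), key=lambda kv: kv[1])
    if total == 0 or count / total <= min_ratio:
        return "off"
    return color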
Step 213, for each target traffic signal lamp, converting each binarized template image into a template image with the same size as the binarized image corresponding to the maximum connected region.
The binarized template images include a circular lamp, a left-turn arrow, a right-turn arrow and a straight arrow, as shown in FIG. 3.
Step 214, calculating, according to formula II, the sum of squared pixel-value differences between the binarized image corresponding to the maximum connected region and each template image.
D_j = Σ_(x,y) [bw_target(x,y) - bw_templet(x,y)]^2
where D_j denotes the sum of squared pixel-value differences, bw_target(x,y) represents the binarized image of the maximum connected region of the target traffic signal lamp, bw_templet(x,y) represents the j-th binarized template image, and (x, y) represents the coordinates of a pixel point.
The sum of squared pixel-value differences between the binarized image corresponding to the maximum connected region of each target traffic signal lamp and each template image is calculated.
Step 215, taking the binarized template image corresponding to the minimum sum of squared differences as the graph of the target traffic signal lamp.
It should be noted that, steps 214 to 215 are performed for each target traffic signal lamp.
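Steps 213 to 215 amount to a nearest-template search under the sum-of-squared-differences measure D_j; a minimal sketch (not the patent's code) follows, where templates maps a graph name to its binarized template image (circular lamp, left-turn arrow, right-turn arrow, straight arrow):

import cv2
import numpy as np

def judge_graph(region_bw, templates):
    # Step 213: resize every binarized template to the size of the region's
    # binarized image.  Step 214: compute D_j.  Step 215: take the minimum.
    h, w = region_bw.shape[:2]
    target = (region_bw > 0).astype(np.float32)
    best_name, best_d = None, None
    for name, tmpl in templates.items():
        tmpl_resized = cv2.resize(tmpl, (w, h), interpolation=cv2.INTER_NEAREST)
        tmpl_bin = (tmpl_resized > 0).astype(np.float32)
        d = float(np.sum((target - tmpl_bin) ** 2))   # D_j for this template
        if best_d is None or d < best_d:
            best_name, best_d = name, d
    return best_name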
Step 216, determining the state information of the target traffic signal lamp according to the color and the graph of the target traffic signal lamp.
After the color and the graph of each target traffic signal lamp are obtained, the target traffic signal lamps are sorted according to the abscissa of the upper-left vertex of their target areas in the traffic scene image to obtain a target traffic signal lamp combination table; an information combination table indexed by the number of traffic signal lamps is then obtained from the target traffic signal lamp combination table.
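A small sketch (again an assumption, not the patent's code) of this final combination step, with hypothetical left coordinates used only for illustration:

def build_combination(results):
    # results: one (left, color, graph) tuple per target traffic signal lamp.
    ordered = sorted(results, key=lambda r: r[0])            # left to right
    states = [f"{color} {graph}" for _, color, graph in ordered]
    return {"count": len(states), "states": states}

# Hypothetical example mirroring Tables 4 and 5:
# build_combination([(500, "green", "left-turn arrow"),
#                    (620, "red", "circular lamp")])
# -> {'count': 2, 'states': ['green left-turn arrow', 'red circular lamp']}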
In one example, color and graph judgment is performed on target traffic signal lamp 1 and target traffic signal lamp 2 in Table 2 to obtain Table 3.
Table 3
[Table 3 is reproduced in the original publication as an image; it lists the color and graph judged for target traffic signal lamps 1 and 2 (a left-turn green light and a round red light).]
A target traffic signal lamp combination table is obtained from Table 3, as shown in Table 4.
Table 4
[Table 4 is reproduced in the original publication as an image; it shows the target traffic signal lamp combination table, ordered by the abscissa of the upper-left vertex of each target area.]
An information combination table indexed by the number of traffic signal lamps is obtained from the target traffic signal lamp combination table, as shown in Table 5.
Table 5
|                    | Left 1                | Left 2          | Left 3 | Left 4 |
| Single signal lamp | ——                    | ——              | ——     | ——     |
| Two signal lamps   | Left turn green light | Round red light | ——     | ——     |
| Three signal lamps | ——                    | ——              | ——     | ——     |
| Four signal lamps  | ——                    | ——              | ——     | ——     |
The traffic signal lamp identification method provided by the embodiment of the invention is based on deep learning, and therefore overcomes the drawbacks of traditional methods, whose hand-designed features require considerable manual intervention and are easily disturbed by the environment. Aimed at the drawback that conventional deep learning methods need a specially built traffic signal lamp training model, it provides a secondary detection framework based on an existing pre-trained model, and has the advantages of simple implementation and application, good real-time performance and strong robustness.
It should be noted that: the above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.

Claims (4)

1. A traffic signal identification method, the method comprising:
acquiring a traffic scene image;
detecting the traffic scene image by using a pre-training model to obtain N pieces of information of the undetermined traffic signal lamps; the pre-training model is obtained by training a YOLO V3 model based on a COCO database, and one piece of information of the undetermined traffic signal lamp corresponds to one undetermined traffic signal lamp;
according to the information of the undetermined traffic signal lamps and preset position parameters, M target traffic signal lamps are selected from the N undetermined traffic signal lamps; n and M are integers, and N is greater than M;
aiming at each target traffic signal lamp, acquiring a maximum connected region of the target traffic signal lamp in the traffic scene image and a binary image corresponding to the maximum connected region;
taking the color with the largest number of pixel points in the maximum connected region as the color of the target traffic signal lamp;
determining the graph of the target traffic signal lamp according to the binarized image corresponding to the maximum connected region and binarized template images, wherein the binarized template images comprise a circular lamp, a left-turn arrow, a right-turn arrow and a straight arrow;
determining the state information of the target traffic signal lamp according to the color and the graph of the target traffic signal lamp;
the undetermined traffic signal lamp information comprises the abscissa and ordinate of the upper-left vertex of the target area of the undetermined traffic signal lamp in the traffic scene image, the width and height of the target area, and the transverse proportion and longitudinal proportion of the target area in the traffic scene image;
the step of determining the graph of the target traffic signal lamp according to the binarized image corresponding to the maximum connected region and the binarized template images comprises the following steps:
for each target traffic signal lamp, converting each binarized template image into a template image with the same size as the binarized image corresponding to the maximum connected region;
and calculating, according to the following formula, the sum of squared pixel-value differences between the binarized image corresponding to the maximum connected region and each template image:
D_j = Σ_(x,y) [bw_target(x,y) - bw_templet(x,y)]^2
taking the binarized template image corresponding to the minimum sum of squared differences as the graph of the target traffic signal lamp;
wherein D_j indicates the sum of squared pixel-value differences between the binarized image corresponding to the maximum connected region and the j-th binarized template image, bw_target(x,y) represents the binarized image of the maximum connected region of the target traffic signal lamp, bw_templet(x,y) represents the j-th binarized template image, and (x, y) represents the coordinates of a pixel point.
2. The method of claim 1, wherein the selecting M target traffic signal lamps from the N undetermined traffic signal lamps according to the undetermined traffic signal lamp information and the preset position parameters comprises:
acquiring P undetermined traffic signal lamps from the N undetermined traffic signal lamps, wherein the transverse proportion of the P undetermined traffic signal lamps is in a preset transverse proportion range, and the longitudinal proportion of the P undetermined traffic signal lamps is in a preset longitudinal proportion range; p is an integer, and P is less than N;
sorting the P undetermined traffic signal lamps in descending order of the area of their target areas in the traffic scene image;
and taking the first M undetermined traffic signal lamps as the target traffic signal lamps.
3. The method according to claim 1, wherein the acquiring a maximum connected region of the target traffic signal lamp in the traffic scene image and a binarized image corresponding to the maximum connected region comprises:
acquiring a binary image of a target area of the target traffic signal lamp in the traffic scene image;
removing noise points in the binarized image of the target area;
and acquiring a maximum connected region and a binarized image corresponding to the maximum connected region from the binarized image of the target region.
4. The method of claim 1, wherein the step of taking the color with the largest number of pixel points in the maximum connected region as the color of the target traffic signal lamp comprises:
judging the color of each pixel point in the maximum connected region according to the following formula:
[Formula I is reproduced in the original publication as an image; it assigns z_i(x,y) the value red, yellow or green according to the color components of the pixel at (x, y).]
counting the number of pixel points corresponding to each color;
detecting whether the ratio of the maximum number of pixel points to the total number of pixel points in the maximum connected region is greater than a predetermined value;
if the ratio of the maximum number of pixel points to the total number of pixel points in the maximum connected region is greater than the predetermined value, taking the color of the pixel points with the maximum number as the color of the target traffic signal lamp;
wherein z_i(x,y) represents the color of the pixel point i with coordinates (x, y).
CN201811158223.5A 2018-09-30 2018-09-30 Traffic signal lamp identification method Active CN109389079B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811158223.5A CN109389079B (en) 2018-09-30 2018-09-30 Traffic signal lamp identification method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811158223.5A CN109389079B (en) 2018-09-30 2018-09-30 Traffic signal lamp identification method

Publications (2)

Publication Number Publication Date
CN109389079A CN109389079A (en) 2019-02-26
CN109389079B true CN109389079B (en) 2022-02-15

Family

ID=65419105

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811158223.5A Active CN109389079B (en) 2018-09-30 2018-09-30 Traffic signal lamp identification method

Country Status (1)

Country Link
CN (1) CN109389079B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112380973B (en) * 2020-11-12 2023-06-23 深兰科技(上海)有限公司 Traffic signal lamp identification method and system

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176287A (en) * 2011-02-28 2011-09-07 无锡中星微电子有限公司 Traffic signal lamp identifying system and method
CN105913041A (en) * 2016-04-27 2016-08-31 浙江工业大学 Pre-marked signal lights based identification method
CN106570494A (en) * 2016-11-21 2017-04-19 北京智芯原动科技有限公司 Traffic signal lamp recognition method and device based on convolution neural network
CN106650641A (en) * 2016-12-05 2017-05-10 北京文安智能技术股份有限公司 Traffic light positioning and identification method, device and system
CN106781521A (en) * 2016-12-30 2017-05-31 东软集团股份有限公司 The recognition methods of traffic lights and device
CN106909937A (en) * 2017-02-09 2017-06-30 北京汽车集团有限公司 Traffic lights recognition methods, control method for vehicle, device and vehicle
CN107038420A (en) * 2017-04-14 2017-08-11 北京航空航天大学 A kind of traffic lights recognizer based on convolutional network
CN107527511A (en) * 2016-06-22 2017-12-29 杭州海康威视数字技术股份有限公司 A kind of intelligent vehicle driving based reminding method and device
CN107704853A (en) * 2017-11-24 2018-02-16 重庆邮电大学 A kind of recognition methods of the traffic lights based on multi-categorizer
CN108108761A (en) * 2017-12-21 2018-06-01 西北工业大学 A kind of rapid transit signal lamp detection method based on depth characteristic study

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102176287A (en) * 2011-02-28 2011-09-07 无锡中星微电子有限公司 Traffic signal lamp identifying system and method
CN105913041A (en) * 2016-04-27 2016-08-31 浙江工业大学 Pre-marked signal lights based identification method
CN107527511A (en) * 2016-06-22 2017-12-29 杭州海康威视数字技术股份有限公司 A kind of intelligent vehicle driving based reminding method and device
CN106570494A (en) * 2016-11-21 2017-04-19 北京智芯原动科技有限公司 Traffic signal lamp recognition method and device based on convolution neural network
CN106650641A (en) * 2016-12-05 2017-05-10 北京文安智能技术股份有限公司 Traffic light positioning and identification method, device and system
CN106781521A (en) * 2016-12-30 2017-05-31 东软集团股份有限公司 The recognition methods of traffic lights and device
CN106909937A (en) * 2017-02-09 2017-06-30 北京汽车集团有限公司 Traffic lights recognition methods, control method for vehicle, device and vehicle
CN107038420A (en) * 2017-04-14 2017-08-11 北京航空航天大学 A kind of traffic lights recognizer based on convolutional network
CN107704853A (en) * 2017-11-24 2018-02-16 重庆邮电大学 A kind of recognition methods of the traffic lights based on multi-categorizer
CN108108761A (en) * 2017-12-21 2018-06-01 西北工业大学 A kind of rapid transit signal lamp detection method based on depth characteristic study

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Traffic signal lamp identification method based on image processing; Wu Ying et al.; Journal of Transport Information and Safety; 2011-12-31; Vol. 29, No. 3; pp. 51-54 *
Image-based traffic signal lamp identification; Wang Tao; Wanfang Data knowledge service platform; 2016-08-31; Chapters 3-4 *
Traffic signal lamp detection and classification method based on deep learning; Wang Ying et al.; Automobile Applied Technology; 2018-09-15 (No. 17); pp. 89-91 *
Traffic light recognition for intelligent vehicles; Li Guangliang et al.; Journal of Hangzhou Dianzi University; 2014-05-31; Vol. 34, No. 3; pp. 123-126 *
Research on traffic light recognition methods in a miniature traffic environment; Liu Yixuan; China Master's Theses Full-text Database, Engineering Science and Technology II; 2016-08-15 (No. 8); C034-313 *

Also Published As

Publication number Publication date
CN109389079A (en) 2019-02-26

Similar Documents

Publication Publication Date Title
CN103077384B (en) A kind of method and system of vehicle-logo location identification
CN101408942B (en) Method for locating license plate under a complicated background
CN102693423B (en) One pinpoint method of car plate under intense light conditions
CN103383733B (en) A kind of track based on half machine learning video detecting method
CN105488454A (en) Monocular vision based front vehicle detection and ranging method
CN109726717B (en) Vehicle comprehensive information detection system
CN111027511B (en) Remote sensing image ship detection method based on region of interest block extraction
CN107492094A (en) A kind of unmanned plane visible detection method of high voltage line insulator
CN107016362B (en) Vehicle weight recognition method and system based on vehicle front windshield pasted mark
CN109215364B (en) Traffic signal recognition method, system, device and storage medium
CN104050447A (en) Traffic light identification method and device
CN108256467B (en) Traffic sign detection method based on visual attention mechanism and geometric features
CN104182754A (en) Rural resident point information extraction method based on high-resolution remote-sensing image
CN109087363B (en) HSV color space-based sewage discharge detection method
CN103729863B (en) Traffic lights automatic location based on autonomous learning is known method for distinguishing
CN110688907A (en) Method and device for identifying object based on road light source at night
CN105809149A (en) Lane line detection method based on straight lines with maximum length
CN107563301A (en) Red signal detection method based on image processing techniques
US10726277B2 (en) Lane line detection method
CN105117726A (en) License plate positioning method based on multi-feature area accumulation
CN103093200A (en) Algorithm for quickly and accurately locating plate number of image
CN108648210B (en) Rapid multi-target detection method and device under static complex scene
CN109389079B (en) Traffic signal lamp identification method
CN108416284A (en) A kind of dividing method of traffic lights
CN113989771A (en) Traffic signal lamp identification method based on digital image processing

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant