CN106570494A - Traffic signal lamp recognition method and device based on convolution neural network - Google Patents


Info

Publication number
CN106570494A
CN106570494A (application CN201611021336.1A)
Authority
CN
China
Prior art keywords
image
region
shape
signal lights
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201611021336.1A
Other languages
Chinese (zh)
Inventor
谢静
崔凯
李党
班华忠
李志国
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhi Xinyuandong Science And Technology Ltd
Original Assignee
Beijing Zhi Xinyuandong Science And Technology Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhi Xinyuandong Science And Technology Ltd
Priority: CN201611021336.1A
Publication: CN106570494A


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/584 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of vehicle lights or traffic lights
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods
    • G06N 3/084 Backpropagation, e.g. using gradient descent

Abstract

The invention provides a traffic signal light recognition method based on a convolutional neural network. The method comprises: choosing sample images of labeled signal light shapes, training a convolutional neural network, and outputting a trained signal light shape recognition model; collecting a color image of the scene with a vehicle-mounted camera; choosing a region of the color image to be segmented, performing black segmentation, and extracting black regions; extracting the image corresponding to each black region in the original image as a lamp-plate image; segmenting red, green, yellow and blue regions from the lamp-plate image, and extracting the images corresponding to the red, green, yellow and blue regions in the original image as red, green, yellow and blue lamp images; performing signal light shape recognition on the red, green, yellow and blue lamp images with the trained signal light shape recognition model; and outputting the color and shape recognition results of the signal lights. Compared with the prior art, the traffic signal light recognition method provided by the invention can quickly recognize traffic signal lights of four colors and ten shapes.

Description

Traffic signal light recognition method and device based on a convolutional neural network
Technical field
The present invention relates to image processing, video surveillance and security, and more particularly to a traffic signal light recognition method and device.
Background art
As an important component of driver assistance systems and unmanned driving systems, research on traffic signal light recognition technology not only helps drivers obtain relevant information about traffic lights, but can also accelerate the development of intelligent transportation. In addition, recognizing the state of traffic signal lights would make driving possible for the 7%-8% of people worldwide with color blindness or color weakness, and would move driverless car technology a step forward.
Existing signal light recognition algorithms mainly exploit the color and shape features of signal lights. Color-based recognition algorithms first choose a color space, describe the signal light colors in it, and select suitable thresholds for segmentation; shape-based recognition algorithms extract candidate regions using shape features such as the circular or arrow shapes of signal lights and the shape information of their attachments. However, in complex background environments the accuracy of these recognition methods is not high.
The Chinese invention patent application published as CN104050447A discloses a traffic signal light recognition method comprising: thresholding the three channel values in RGB space to obtain a binary image of black regions; locating the lamp-plate region in the black-region binary image according to the features of the lamp plate; converting the lamp-plate region from RGB space to YCbCr space and segmenting red, yellow and green regions; determining candidate regions within the red, yellow and green regions according to the features of signal lights; converting the candidate-region image of the original image to grayscale, normalizing it, and applying a Gabor wavelet transform to extract a feature vector; and finally determining the signal light class by comparing the similarity of the feature vector with training samples. However, this method performs recognition with the extracted Gabor feature vector, and its recognition accuracy is relatively low.
In summary, there is an urgent need for a fast traffic signal light recognition method and device with high recognition accuracy.
Summary of the invention
In view of this, the main object of the present invention is to achieve fast recognition of traffic signal lights with high recognition accuracy.
To achieve the above object, according to a first aspect of the present invention, a traffic signal light recognition method based on a convolutional neural network is provided, the method comprising:
a first step of choosing sample images of labeled signal light shapes, training a convolutional neural network, and outputting a trained signal light shape recognition model;
a second step of collecting a color image of the scene with a vehicle-mounted camera;
a third step of choosing a region of the color image to be segmented, performing black segmentation, and extracting black regions;
a fourth step of extracting the image corresponding to each black region in the original image as a lamp-plate image;
a fifth step of segmenting red, green, yellow and blue regions from the lamp-plate image, and extracting the images corresponding to the red, green, yellow and blue regions in the original image as red, green, yellow and blue lamp images;
a sixth step of performing signal light shape recognition on the red, green, yellow and blue lamp images using the trained signal light shape recognition model; and
a seventh step of outputting the color and shape recognition results of the signal lights.
Further, the first step comprises:
a sample selection step of choosing LINum color images of labeled signal light shapes as sample images;
an initial training step of training the convolutional neural network on the sample images to obtain an initial training model;
a second training step of choosing TINum test images and repeatedly training on the test images according to the initial training model until the model converges;
a model output step of outputting the converged model as the signal light shape recognition model.
In the sample selection step, the labeled images of the different signal light shapes are chosen in certain proportions. Among the LINum color images of labeled signal light shapes, the number of circular signal light images is RC1*LINum, that of horizontal-bar signal lights is RC2*LINum, that of vertical-bar signal lights is RC3*LINum, that of up-arrow signal lights is RC4*LINum, that of left-arrow signal lights is RC5*LINum, that of right-arrow signal lights is RC6*LINum, that of fork-shaped signal lights is RC7*LINum, that of pedestrian signal lights is RC8*LINum, that of bicycle signal lights is RC9*LINum, and that of other shapes is RC10*LINum, where RC1 + RC2 + … + RC10 = 1, T1 ≤ RCi < 0.1 for i ∈ {1, 2, 3}, and 0.1 ≤ RCj ≤ T2 for j ∈ {4, 5, …, 10}.
Further, in the initial training step, the convolutional neural network comprises:
an input layer, which takes as input a color image of width Width and height Height;
a first convolutional layer, which outputs Th_CK1 convolution kernels; the kernel size is CKSi1*CKSi1 and the stride is 1;
a first pooling layer, which applies max pooling with a KSi*KSi kernel and a stride of 2;
a second convolutional layer, which outputs Th_CK2 convolution kernels; the kernel size is CKSi2*CKSi2 and the stride is 1;
a second pooling layer, which applies max pooling with a KSi*KSi kernel and a stride of 2;
a third convolutional layer, which outputs Th_CK3 convolution kernels; the kernel size is CKSi3*CKSi3 and the stride is 1;
a first fully connected layer, which uses ReLU as its activation function and outputs Th_Neur neurons;
a second fully connected layer, which outputs 10 neurons, i.e. the 10 signal light shape classes.
Further, the second training step comprises:
a training feature extraction step of extracting the features of the TINum test images according to the initial training model;
a training class determination step of computing the similarity Simi_k between each feature and each signal light shape class feature, where k denotes the k-th class, k ∈ {1, 2, …, 10}, and choosing the signal light shape class with the largest Simi_k as the candidate signal light shape class;
a repeated training step of computing the error between the determined result and the true result, training the model by back-propagation, and repeating the training feature extraction step and the training class determination step until the model converges.
Further, the third step comprises:
a region-to-be-segmented selection step: for a color image of height H, the region of the color image whose pixel vertical coordinates lie in [0, λ1*H] is selected as the region to be segmented;
a black pixel extraction step: for each pixel (x, y) in the region to be segmented, its red component value R(x, y), green component value G(x, y) and blue component value B(x, y) are examined; if all three component values are less than the segmentation threshold T3, pixel (x, y) is labeled a black pixel, otherwise it is labeled a non-black pixel, yielding a binary image of the region to be segmented;
a connected region processing step: with the black pixels as foreground points and the non-black pixels as background points, connected component labeling is applied to the binary image of the region to be segmented, yielding a series of labeled connected regions;
a connected region screening step: for each connected region, its width W_CR and height H_CR are measured, and its aspect ratio R_WH = W_CR/H_CR and the area of its bounding rectangle S_WH = W_CR*H_CR are computed; if the connected region satisfies both T4 ≤ R_WH ≤ T5 and S_WH ≥ T6, it is retained, otherwise it is filtered out;
a black region acquisition step: the coordinate positions of the bounding rectangles of the retained connected regions are recorded, the rectangular areas at the same coordinate positions are located in the region to be segmented, and these rectangular areas are output as the black regions.
Further, the fifth step comprises:
a color segmentation step: for each pixel (x, y) in the lamp-plate image, its red component value R(x, y), green component value G(x, y) and blue component value B(x, y) are examined; if R(x, y) > G(x, y) and R(x, y) > B(x, y), the pixel is marked as a red pixel; if G(x, y) > R(x, y) and G(x, y) > B(x, y), it is marked as a green pixel; if B(x, y) > R(x, y) and B(x, y) > G(x, y), it is marked as a blue pixel; if R(x, y) and G(x, y) are approximately equal and both greater than B(x, y), it is marked as a yellow pixel; otherwise it is marked as another pixel;
a color region acquisition step: with the red, green, yellow and blue pixels respectively as foreground points and the other pixels as background points, connected component labeling is applied to the binary images of the lamp-plate image, yielding a series of labeled red, green, yellow and blue regions;
a color lamp image acquisition step: the images corresponding to the red, green, yellow and blue regions in the original image are extracted respectively as the red, green, yellow and blue lamp images.
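As a code sketch, the per-pixel rule of the color segmentation step can be written as follows. This is a minimal illustration rather than the patent's implementation: the yellow condition is garbled in the source text, so R and G being nearly equal while both exceed B is assumed here, with a hypothetical tolerance `T_Y` that is not a value from the patent.

```python
T_Y = 30  # hypothetical tolerance for "R close to G"; not from the source

def pixel_color(r, g, b):
    """Classify one lamp-plate pixel by its dominant channel(s)."""
    if abs(r - g) <= T_Y and r > b and g > b:
        return "yellow"   # assumed yellow rule: R ~ G, both above B
    if r > g and r > b:
        return "red"      # R strictly dominant
    if g > r and g > b:
        return "green"    # G strictly dominant
    if b > r and b > g:
        return "blue"     # B strictly dominant
    return "other"

print(pixel_color(210, 60, 40))   # -> red
print(pixel_color(200, 190, 50))  # -> yellow
```

The yellow test is placed first because with strict inequalities alone, a pixel whose R barely exceeds G would otherwise always be claimed by the red rule.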
Further, the sixth step comprises:
a recognition feature extraction step of extracting the features of the red, green, yellow and blue lamp images respectively using the signal light shape recognition model;
a recognition class determination step of computing the similarity Simi_k between each extracted recognition feature and each signal light shape class feature, where k denotes the k-th class, k ∈ {1, 2, …, 10}, and choosing the class with the largest Simi_k as the signal light shape class.
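The class decision of the recognition class determination step can be sketched as below. The patent does not fix how Simi_k is computed; cosine similarity between the extracted feature vector and each class feature vector is assumed here purely for illustration.

```python
import math

def cosine_similarity(a, b):
    # assumed form of Simi_k; the source does not specify the similarity measure
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

def shape_class(feature, class_features):
    """Return the 1-based class index k maximizing Simi_k, as in the text."""
    sims = [cosine_similarity(feature, cf) for cf in class_features]
    return 1 + max(range(len(sims)), key=sims.__getitem__)

# toy example with 3 hypothetical class feature vectors instead of 10
cf = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(shape_class([0.9, 0.1], cf))  # -> 1
```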
According to another aspect of the present invention, a traffic signal light recognition device based on a convolutional neural network is provided, the device comprising:
a signal light shape recognition model training module for choosing sample images of labeled signal light shapes, training a convolutional neural network, and outputting the trained signal light shape recognition model;
a scene image acquisition module for collecting a color image of the scene with a vehicle-mounted camera;
a black region extraction module for choosing a region of the color image to be segmented, performing black segmentation, and extracting black regions;
a lamp-plate image extraction module for extracting the image corresponding to each black region in the original image as a lamp-plate image;
a color lamp image extraction module for segmenting red, green, yellow and blue regions from the lamp-plate image, and extracting the images corresponding to the red, green, yellow and blue regions in the original image as red, green, yellow and blue lamp images;
a color lamp image shape recognition module for performing signal light shape recognition on the red, green, yellow and blue lamp images using the trained signal light shape recognition model; and
a signal light color and shape output module for outputting the color and shape recognition results of the signal lights.
The signal light shape recognition model training module comprises:
a sample selection module for choosing LINum color images of labeled signal light shapes as sample images;
an initial training module for training the convolutional neural network on the sample images to obtain an initial training model;
a second training module for choosing TINum test images and repeatedly training on the test images according to the initial training model until the model converges;
a model output module for outputting the converged model as the signal light shape recognition model.
Further, the sample selection module chooses the labeled images of the different signal light shapes in certain proportions. Among the LINum color images of labeled signal light shapes, the number of circular signal light images is RC1*LINum, that of horizontal-bar signal lights is RC2*LINum, that of vertical-bar signal lights is RC3*LINum, that of up-arrow signal lights is RC4*LINum, that of left-arrow signal lights is RC5*LINum, that of right-arrow signal lights is RC6*LINum, that of fork-shaped signal lights is RC7*LINum, that of pedestrian signal lights is RC8*LINum, that of bicycle signal lights is RC9*LINum, and that of other shapes is RC10*LINum, where RC1 + RC2 + … + RC10 = 1, T1 ≤ RCi < 0.1 for i ∈ {1, 2, 3}, and 0.1 ≤ RCj ≤ T2 for j ∈ {4, 5, …, 10}.
Further, in the initial training module, the convolutional neural network comprises:
an input layer, which takes as input a color image of width Width and height Height;
a first convolutional layer, which outputs Th_CK1 convolution kernels; the kernel size is CKSi1*CKSi1 and the stride is 1;
a first pooling layer, which applies max pooling with a KSi*KSi kernel and a stride of 2;
a second convolutional layer, which outputs Th_CK2 convolution kernels; the kernel size is CKSi2*CKSi2 and the stride is 1;
a second pooling layer, which applies max pooling with a KSi*KSi kernel and a stride of 2;
a third convolutional layer, which outputs Th_CK3 convolution kernels; the kernel size is CKSi3*CKSi3 and the stride is 1;
a first fully connected layer, which uses ReLU as its activation function and outputs Th_Neur neurons;
a second fully connected layer, which outputs 10 neurons, i.e. the 10 signal light shape classes.
Further, the second training module comprises:
a training feature extraction module for extracting the features of the TINum test images according to the initial training model;
a training class determination module for computing the similarity Simi_k between each feature and each signal light shape class feature, where k denotes the k-th class, k ∈ {1, 2, …, 10}, and choosing the signal light shape class with the largest Simi_k as the candidate signal light shape class;
a repeated training module for computing the error between the determined result and the true result, training the model by back-propagation, and repeatedly invoking the training feature extraction module and the training class determination module until the model converges.
Further, the black region extraction module comprises:
a region-to-be-segmented selection module for selecting, in a color image of height H, the region of the color image whose pixel vertical coordinates lie in [0, λ1*H] as the region to be segmented;
a black pixel extraction module for examining, for each pixel (x, y) in the region to be segmented, its red component value R(x, y), green component value G(x, y) and blue component value B(x, y); if all three component values are less than the segmentation threshold T3, pixel (x, y) is labeled a black pixel, otherwise it is labeled a non-black pixel, yielding a binary image of the region to be segmented;
a connected region processing module for applying, with the black pixels as foreground points and the non-black pixels as background points, connected component labeling to the binary image of the region to be segmented, yielding a series of labeled connected regions;
a connected region screening module for measuring the width W_CR and height H_CR of each connected region and computing its aspect ratio R_WH = W_CR/H_CR and the area of its bounding rectangle S_WH = W_CR*H_CR; if the connected region satisfies both T4 ≤ R_WH ≤ T5 and S_WH ≥ T6, it is retained, otherwise it is filtered out;
a black region acquisition module for recording the coordinate positions of the bounding rectangles of the retained connected regions, locating the rectangular areas at the same coordinate positions in the region to be segmented, and outputting these rectangular areas as the black regions.
Further, the color lamp image extraction module comprises:
a color segmentation module for examining, for each pixel (x, y) in the lamp-plate image, its red component value R(x, y), green component value G(x, y) and blue component value B(x, y); if R(x, y) > G(x, y) and R(x, y) > B(x, y), the pixel is marked as a red pixel; if G(x, y) > R(x, y) and G(x, y) > B(x, y), it is marked as a green pixel; if B(x, y) > R(x, y) and B(x, y) > G(x, y), it is marked as a blue pixel; if R(x, y) and G(x, y) are approximately equal and both greater than B(x, y), it is marked as a yellow pixel; otherwise it is marked as another pixel;
a color region acquisition module for applying, with the red, green, yellow and blue pixels respectively as foreground points and the other pixels as background points, connected component labeling to the binary images of the lamp-plate image, yielding a series of labeled red, green, yellow and blue regions;
a color lamp image acquisition module for extracting the images corresponding to the red, green, yellow and blue regions in the original image respectively as the red, green, yellow and blue lamp images.
Further, the color lamp image shape recognition module comprises:
a recognition feature extraction module for extracting the features of the red, green, yellow and blue lamp images respectively using the signal light shape recognition model;
a recognition class determination module for computing the similarity Simi_k between each extracted recognition feature and each signal light shape class feature, where k denotes the k-th class, k ∈ {1, 2, …, 10}, and choosing the class with the largest Simi_k as the signal light shape class.
Compared with existing traffic signal light recognition technology, the present invention on the one hand reduces the amount of computation by delimiting the region to be segmented and extracting the lamp-plate image, and on the other hand improves the accuracy of signal light shape recognition by training a signal light shape recognition model with a convolutional neural network.
Description of the drawings
Fig. 1 shows the flow chart of the traffic signal light recognition method based on a convolutional neural network according to the present invention.
Fig. 2 shows the block diagram of the traffic signal light recognition device based on a convolutional neural network according to the present invention.
Detailed description of the embodiments
To enable the examiner to further understand the structure, features and other objects of the present invention, a detailed description is given below in conjunction with the accompanying preferred embodiments. The illustrated preferred embodiments are merely intended to illustrate the technical solution of the present invention and do not limit the present invention.
Fig. 1 gives the flow chart of the traffic signal light recognition method based on a convolutional neural network according to the present invention. As shown in Fig. 1, the traffic signal light recognition method based on a convolutional neural network according to the present invention comprises:
a first step S1 of choosing sample images of labeled signal light shapes, training a convolutional neural network, and outputting the trained signal light shape recognition model;
a second step S2 of collecting a color image of the scene with a vehicle-mounted camera;
a third step S3 of choosing a region of the color image to be segmented, performing black segmentation, and extracting black regions;
a fourth step S4 of extracting the image corresponding to each black region in the original image as a lamp-plate image;
a fifth step S5 of segmenting red, green, yellow and blue regions from the lamp-plate image, and extracting the images corresponding to the red, green, yellow and blue regions in the original image as the red, green, yellow and blue lamp images;
a sixth step S6 of performing signal light shape recognition on the red, green, yellow and blue lamp images using the trained signal light shape recognition model; and
a seventh step S7 of outputting the color and shape recognition results of the signal lights.
In the first step S1, the sample images of labeled signal light shapes are labeled images of circles, horizontal bars, vertical bars, up arrows, left arrows, right arrows, fork shapes, pedestrians, bicycles and other shapes.
The training of the convolutional neural network in the first step S1 can be implemented with existing convolutional neural network training methods.
Further, the first step S1 comprises:
a sample selection step S11 of choosing LINum color images of labeled signal light shapes as sample images;
an initial training step S12 of training the convolutional neural network on the sample images to obtain an initial training model;
a second training step S13 of choosing TINum test images and repeatedly training on the test images according to the initial training model until the model converges;
a model output step S14 of outputting the converged model as the signal light shape recognition model.
In the sample selection step S11, the signal light shapes include: circle, horizontal bar, vertical bar, up arrow, left arrow, right arrow, fork shape, pedestrian, bicycle and other shapes.
Further, in the sample selection step S11, the labeled images of the different signal light shapes are chosen in certain proportions. Among the LINum color images of labeled signal light shapes, the number of circular signal light images is RC1*LINum, that of horizontal-bar signal lights is RC2*LINum, that of vertical-bar signal lights is RC3*LINum, that of up-arrow signal lights is RC4*LINum, that of left-arrow signal lights is RC5*LINum, that of right-arrow signal lights is RC6*LINum, that of fork-shaped signal lights is RC7*LINum, that of pedestrian signal lights is RC8*LINum, that of bicycle signal lights is RC9*LINum, and that of other shapes is RC10*LINum, where RC1 + RC2 + … + RC10 = 1, T1 ≤ RCi < 0.1 for i ∈ {1, 2, 3}, and 0.1 ≤ RCj ≤ T2 for j ∈ {4, 5, …, 10}.
Further, LINum ≥ 4000, T1 ∈ [0.05, 0.08] and T2 ∈ [0.15, 0.25]. Preferably, LINum ≥ 16000, T1 is set to 0.06 and T2 is set to 0.2.
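The proportions can be illustrated numerically. The sketch below uses the preferred values LINum = 16000, T1 = 0.06 and T2 = 0.2; the individual ratios RC are assumptions chosen only to satisfy the stated constraints (summing to 1, T1 ≤ RCi < 0.1 for the first three shapes, 0.1 ≤ RCj ≤ T2 for the rest), not values from the patent.

```python
LINum = 16000        # total labeled sample images (preferred value from the text)
T1, T2 = 0.06, 0.2   # preferred bounds from the text

# shapes: circle, horizontal bar, vertical bar, up/left/right arrow,
# fork (X), pedestrian, bicycle, other -- ratio values are assumptions
RC = [0.08, 0.06, 0.06, 0.12, 0.12, 0.12, 0.11, 0.11, 0.11, 0.11]

assert abs(sum(RC) - 1.0) < 1e-9              # ratios sum to 1
assert all(T1 <= r < 0.1 for r in RC[:3])     # constraint on shapes 1-3
assert all(0.1 <= r <= T2 for r in RC[3:])    # constraint on shapes 4-10

counts = [round(r * LINum) for r in RC]       # images per shape class
print(counts)  # -> [1280, 960, 960, 1920, 1920, 1920, 1760, 1760, 1760, 1760]
```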
Further, in the initial training step S12, the convolutional neural network comprises:
an input layer, which takes as input a color image of width Width and height Height;
a first convolutional layer, which outputs Th_CK1 convolution kernels; the kernel size is CKSi1*CKSi1 and the stride is 1;
a first pooling layer, which applies max pooling with a KSi*KSi kernel and a stride of 2;
a second convolutional layer, which outputs Th_CK2 convolution kernels; the kernel size is CKSi2*CKSi2 and the stride is 1;
a second pooling layer, which applies max pooling with a KSi*KSi kernel and a stride of 2;
a third convolutional layer, which outputs Th_CK3 convolution kernels; the kernel size is CKSi3*CKSi3 and the stride is 1;
a first fully connected layer, which uses ReLU as its activation function and outputs Th_Neur neurons;
a second fully connected layer, which outputs 10 neurons, i.e. the 10 signal light shape classes.
Here Width ∈ [18, 50] and Height ∈ [18, 50]; Th_CK1 ∈ [6, 20]; CKSi1 is 3, 5 or 7; KSi ∈ [2, 4]; Th_CK2 ∈ [10, 40]; CKSi2 is 3, 5 or 7; Th_CK3 ∈ [10, 40]; CKSi3 is 3, 5 or 7; Th_Neur ∈ [160, 10000].
Preferably, Width is set to 30 and Height is set to 30; Th_CK1 is set to 16, CKSi1 to 3, KSi to 2, Th_CK2 to 32, CKSi2 to 3, Th_CK3 to 32 and CKSi3 to 3; Th_Neur is set to 256.
The 10 signal light shape classes of the second fully connected layer include: circle, horizontal bar, vertical bar, up arrow, left arrow, right arrow, fork shape, pedestrian, bicycle and other shapes.
In the first fully connected layer, ReLU stands for Rectified Linear Units; see "Taming the ReLU with Parallel Dither in a Deep Neural Network. A. J. R. Simpson. Computer Science, 2015".
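With the preferred settings above, the spatial dimensions through the network can be checked with a short calculation. Unpadded ("valid") convolutions are assumed here, since the text does not state the padding.

```python
# Shape walk-through of the preferred network: 30x30 input, 3x3 convolutions
# with stride 1, and 2x2 max pooling with stride 2.
def conv(size, k, stride=1):      # output spatial size of a valid convolution
    return (size - k) // stride + 1

def pool(size, k=2, stride=2):    # output spatial size of max pooling
    return (size - k) // stride + 1

s = 30                 # Width = Height = 30
s = conv(s, 3)         # conv1: 16 kernels 3x3 -> 28x28
s = pool(s)            # pool1 -> 14x14
s = conv(s, 3)         # conv2: 32 kernels 3x3 -> 12x12
s = pool(s)            # pool2 -> 6x6
s = conv(s, 3)         # conv3: 32 kernels 3x3 -> 4x4
flat = s * s * 32      # features flattened into the first FC layer (ReLU, 256)
print(s, flat)         # -> 4 512
```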
Further, the second training step S13 comprises:
a training feature extraction step S131 of extracting the features of the TINum test images according to the initial training model;
a training class determination step S132 of computing the similarity Simi_k between each feature and each signal light shape class feature, where k denotes the k-th class, k ∈ {1, 2, …, 10}, and choosing the signal light shape class with the largest Simi_k as the candidate signal light shape class;
a repeated training step S133 of computing the error between the determined result and the true result, training the model by back-propagation, and repeating the training feature extraction step S131 and the training class determination step S132 until the model converges.
TINum ≥ 1000; preferably, TINum ≥ 4000. The back-propagation algorithm is implemented with existing techniques.
In the second step S2, the vehicle-mounted camera is installed at a position on the vehicle from which the road image ahead can be collected, for example at the interior rear-view mirror, on the roof, or at the front of the vehicle.
Further, the third step S3 comprises:
a region-to-be-segmented selection step S31: for a color image of height H, the region of the color image whose pixel vertical coordinates lie in [0, λ1*H] is selected as the region to be segmented;
a black pixel extraction step S32: for each pixel (x, y) in the region to be segmented, its red component value R(x, y), green component value G(x, y) and blue component value B(x, y) are examined; if all three component values are less than the segmentation threshold T3, pixel (x, y) is labeled a black pixel, otherwise it is labeled a non-black pixel, yielding a binary image of the region to be segmented;
a connected region processing step S33: with the black pixels as foreground points and the non-black pixels as background points, connected component labeling is applied to the binary image of the region to be segmented, yielding a series of labeled connected regions;
a connected region screening step S34: for each connected region, its width W_CR and height H_CR are measured, and its aspect ratio R_WH = W_CR/H_CR and the area of its bounding rectangle S_WH = W_CR*H_CR are computed; if the connected region satisfies both T4 ≤ R_WH ≤ T5 and S_WH ≥ T6, it is retained, otherwise it is filtered out;
a black region acquisition step S35: the coordinate positions of the bounding rectangles of the retained connected regions are recorded, the rectangular areas at the same coordinate positions are located in the region to be segmented, and these rectangular areas are output as the black regions.
In the region-to-be-segmented selection step S31, λ1 ∈ [0.3, 0.8]; preferably λ1 is 0.5 or 0.6.
In the black pixel extraction step S32, "all three components are smaller than the segmentation threshold T3" means that the following hold simultaneously: R(x, y) < T3, G(x, y) < T3 and B(x, y) < T3.
The segmentation threshold T3 ∈ [55, 65]; preferably T3 is 60.
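The thresholding of step S32 can be sketched in a few lines of Python; `is_black` and `binarize` are illustrative names, and the preferred threshold T3 = 60 is taken from the description above.

```python
T3 = 60  # preferred segmentation threshold from the description

def is_black(r, g, b, t3=T3):
    # A pixel is black iff all three components fall below T3.
    return r < t3 and g < t3 and b < t3

def binarize(region):
    # region: rows of (R, G, B) tuples from the region to be segmented.
    # Returns a binary image: 1 for black (foreground), 0 otherwise.
    return [[1 if is_black(*px) else 0 for px in row] for row in region]
```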
In the connected-region processing step S33, the connected-component labeling method can be any existing connected-component labeling algorithm, for example: Feng Haiwen, Niu Lianqiang, Liu Xiaoming, "An efficient scan-based connected-component labeling algorithm", Computer Engineering and Applications, 2014, 50(23): 31–35.
In the connected-region screening step S34, T4 ∈ [0.2, 0.3], T5 ∈ [4, 5] and T6 ∈ [15, 25]; preferably T4 is 0.26, T5 is 4.5 and T6 is 20.
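Under these preferred parameter values, the screening rule of step S34 amounts to a simple predicate on each connected region's bounding box; the aspect ratio is taken as width over height, matching how the description names R_WH (a sketch, not the patented implementation):

```python
T4, T5, T6 = 0.26, 4.5, 20  # preferred values from the description

def keep_region(w_cr, h_cr, t4=T4, t5=T5, t6=T6):
    # Keep a connected region iff T4 <= R_WH <= T5 and S_WH >= T6.
    r_wh = w_cr / h_cr  # aspect ratio of the region
    s_wh = w_cr * h_cr  # area of the bounding rectangle
    return t4 <= r_wh <= t5 and s_wh >= t6
```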
Further, the fifth step S5 includes:
Color segmentation step S51: for each pixel (x, y) in the lamp panel image, read its red component R(x, y), green component G(x, y) and blue component B(x, y); if R(x, y) > G(x, y) and R(x, y) > B(x, y), mark the pixel as a red pixel; if G(x, y) > R(x, y) and G(x, y) > B(x, y), mark it as a green pixel; if B(x, y) > R(x, y) and B(x, y) > G(x, y), mark it as a blue pixel; if the yellow condition parameterized by the threshold T7 is satisfied (the inequalities are given as formulas in the original), mark it as a yellow pixel; otherwise label it as an other pixel;
Color region obtaining step S52: taking the red, green, yellow and blue pixels in turn as foreground and all other pixels as background, apply the connected-component labeling method to the binary image of the lamp panel image, obtaining a series of labeled red, green, yellow and blue regions;
Color light image obtaining step S53: extract from the original image the sub-images corresponding to the red, green, yellow and blue regions as the red-light, green-light, yellow-light and blue-light images.
T7 ∈ [1.1, 1.3]; preferably T7 is 1.2.
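The red, green and blue tests of step S51 are stated explicitly; the yellow condition appears only as formulas in the source, so the version below is an assumed reading in which R and G lie within a factor T7 of each other and both exceed B. The function name and the evaluation order are illustrative.

```python
T7 = 1.2  # preferred value from the description

def classify_pixel(r, g, b, t7=T7):
    # Assumed yellow test: R and G within a factor t7 of each other,
    # both dominating B (the exact inequalities are formulas in the source).
    lo, hi = min(r, g), max(r, g)
    if lo > 0 and hi <= t7 * lo and b < lo:
        return "yellow"
    if r > g and r > b:
        return "red"
    if g > r and g > b:
        return "green"
    if b > r and b > g:
        return "blue"
    return "other"
```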
Further, the sixth step S6 includes:
Recognition feature extraction step S61: use the signal lamp shape recognition model to extract features from the red-light, green-light, yellow-light and blue-light images respectively;
Recognition classification determination step S62: compute the similarity Simi_k between the extracted features and the features of each signal lamp shape class, where k denotes the k-th class, k ∈ {1, 2, ..., 10}, and choose the class with the largest Simi_k as the signal lamp shape class.
Further, the seventh step S7 outputs the red-light, green-light, yellow-light and blue-light images together with the signal lamp shape recognition results.
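The determination rule of the classification step above is an argmax over the ten similarity scores; the class names follow the shape list given for the sample images, and the helper below is only a sketch.

```python
CLASSES = ["circle", "horizontal bar", "vertical bar", "up arrow",
           "left arrow", "right arrow", "cross", "pedestrian",
           "bicycle", "other"]

def determine_shape(simi):
    # simi: the ten similarities Simi_k, k = 1..10.
    # Return the class whose similarity is maximal.
    k = max(range(len(simi)), key=lambda i: simi[i])
    return CLASSES[k]
```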
Fig. 2 shows the block diagram of the traffic signal light recognition device based on a convolutional neural network according to the present invention. As shown in Fig. 2, the device includes:
a signal lamp shape recognition model training module 1, for choosing sample images of labeled signal lamp shapes, training a convolutional neural network, and outputting the trained signal lamp shape recognition model;
a scene image acquisition module 2, for acquiring a color image of the scene with an in-vehicle camera;
a black region extraction module 3, for choosing the region to be segmented of the color image, performing black segmentation, and extracting the black regions;
a lamp panel image extraction module 4, for extracting the sub-images corresponding to the black regions in the original image as the lamp panel images;
a color light image extraction module 5, for segmenting red, green, yellow and blue regions from the lamp panel image and extracting the corresponding sub-images of the original image as the red-light, green-light, yellow-light and blue-light images;
a color light shape recognition module 6, for performing signal lamp shape recognition on the red-light, green-light, yellow-light and blue-light images with the trained signal lamp shape recognition model; and
a signal lamp color and shape output module 7, for outputting the color and shape recognition results of the signal lamps.
In the signal lamp shape recognition model training module 1, the sample images of labeled signal lamp shapes are images labeled as circle, horizontal bar, vertical bar, up arrow, left arrow, right arrow, cross, pedestrian, bicycle and other shapes.
The training of the convolutional neural network in the signal lamp shape recognition model training module 1 can be performed with any existing convolutional neural network training method.
Further, the signal lamp shape recognition model training module 1 includes:
a sample selection module 11, for choosing LINum color images of labeled signal lamp shapes as sample images;
an initial training module 12, for training the convolutional neural network on the sample images to obtain an initial training model;
a second training module 13, for choosing TINum test images and repeatedly training on the test images starting from the initial training model until the model converges;
a model output module 14, for taking the converged model as the signal lamp shape recognition model and outputting it.
In the sample selection module 11, the signal lamp shapes include: circle, horizontal bar, vertical bar, up arrow, left arrow, right arrow, cross, pedestrian, bicycle and other shapes.
Further, the sample selection module 11 chooses labeled images of the different signal lamp shapes in fixed proportions. For the LINum color images of labeled signal lamp shapes, the number of circle images is RC1·LINum, the number of horizontal-bar images is RC2·LINum, the number of vertical-bar images is RC3·LINum, the number of up-arrow images is RC4·LINum, the number of left-arrow images is RC5·LINum, the number of right-arrow images is RC6·LINum, the number of cross images is RC7·LINum, the number of pedestrian images is RC8·LINum, the number of bicycle images is RC9·LINum, and the number of other-shape images is RC10·LINum, where the proportions RC1, ..., RC10 sum to 1 and satisfy T1 ≤ RC_i < 0.1 for i ∈ {1, 2, 3} and 0.1 ≤ RC_j ≤ T2 for j ∈ {4, 5, ..., 10}.
Further, LINum ≥ 4000, T1 ∈ [0.05, 0.08] and T2 ∈ [0.15, 0.25]; preferably LINum ≥ 16000, T1 is 0.06 and T2 is 0.2.
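The sampling constraints on the class proportions RC1..RC10 can be checked mechanically. The sketch below assumes, as the formulas imply for proportions of a partition, that they sum to 1, and uses the preferred bounds T1 = 0.06 and T2 = 0.2; the function name is illustrative.

```python
T1, T2 = 0.06, 0.2  # preferred bounds from the description

def valid_proportions(rc, t1=T1, t2=T2):
    # rc: the ten proportions RC1..RC10 (assumed to sum to 1).
    if len(rc) != 10 or abs(sum(rc) - 1.0) > 1e-6:
        return False
    first_ok = all(t1 <= r < 0.1 for r in rc[:3])   # circle, bars
    rest_ok = all(0.1 <= r <= t2 for r in rc[3:])   # arrows, cross, ...
    return first_ok and rest_ok
```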
Further, in the initial training module 12, the convolutional neural network includes:
an input layer, which takes a color image of width Width and height Height;
a first convolutional layer, which outputs Th_CK1 convolution kernels of size CKSi1×CKSi1 with stride 1;
a first pooling layer, which applies max pooling with a KSi×KSi kernel and stride 2;
a second convolutional layer, which outputs Th_CK2 convolution kernels of size CKSi2×CKSi2 with stride 1;
a second pooling layer, which applies max pooling with a KSi×KSi kernel and stride 2;
a third convolutional layer, which outputs Th_CK3 convolution kernels of size CKSi3×CKSi3 with stride 1;
a first fully connected layer, which uses ReLU as the activation function and outputs Th_Neur neurons;
a second fully connected layer, which outputs 10 neurons, i.e. the 10 signal lamp shape classes.
Here Width ∈ [18, 50], Height ∈ [18, 50], Th_CK1 ∈ [6, 20], CKSi1 is 3, 5 or 7, KSi ∈ [2, 4], Th_CK2 ∈ [10, 40], CKSi2 is 3, 5 or 7, Th_CK3 ∈ [10, 40], CKSi3 is 3, 5 or 7, and Th_Neur ∈ [160, 10000].
Preferably, Width is 30, Height is 30, Th_CK1 is 16, CKSi1 is 3, KSi is 2, Th_CK2 is 32, CKSi2 is 3, Th_CK3 is 32, CKSi3 is 3 and Th_Neur is 256.
The 10 signal lamp shape classes of the second fully connected layer are: circle, horizontal bar, vertical bar, up arrow, left arrow, right arrow, cross, pedestrian, bicycle and other shapes.
In the first fully connected layer, ReLU stands for Rectified Linear Units; see "Taming the ReLU with Parallel Dither in a Deep Neural Network", A. J. R. Simpson, Computer Science, 2015.
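With the preferred 30×30 input, the spatial sizes through the three convolutional and two pooling layers can be traced as follows. The sketch assumes unpadded ("valid") convolutions, since the patent does not state a padding scheme.

```python
def conv_out(size, kernel, stride):
    # Output size of an unpadded convolution/pooling along one axis.
    return (size - kernel) // stride + 1

def trace_sizes(width=30, height=30):
    # Preferred architecture as (kernel, stride) per layer:
    # conv 3x3/1 -> pool 2x2/2 -> conv 3x3/1 -> pool 2x2/2 -> conv 3x3/1.
    sizes = [(width, height)]
    for k, s in [(3, 1), (2, 2), (3, 1), (2, 2), (3, 1)]:
        w, h = sizes[-1]
        sizes.append((conv_out(w, k, s), conv_out(h, k, s)))
    return sizes
```

Under these assumptions the trace is 30 → 28 → 14 → 12 → 6 → 4, so the first fully connected layer would see 4·4·32 = 512 inputs and output Th_Neur = 256 neurons.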
Further, the second training module 13 includes:
a training feature extraction module 131, for extracting the features of the TINum test images according to the initial training model;
a training classification determination module 132, for computing the similarity Simi_k between these features and the features of each signal lamp shape class, where k denotes the k-th class, k ∈ {1, 2, ..., 10}, and choosing the signal lamp shape class with the largest Simi_k as the candidate signal lamp shape class;
a repeated training module 133, for computing the error between the determination result and the ground-truth result, training the model with the back-propagation algorithm, and repeatedly invoking the training feature extraction module 131 and the training classification determination module 132 until the model converges.
TINum ≥ 1000; preferably TINum ≥ 4000. The back-propagation algorithm is implemented with existing techniques.
In the scene image acquisition module 2, the in-vehicle camera is mounted at any position on the vehicle from which the road image ahead of the vehicle can be captured, for example at the interior rear-view mirror or on the roof.
Further, the black region extraction module 3 includes:
a region-to-be-segmented selection module 31, for selecting, in a color image of height H, the image region whose pixels have vertical coordinates in [0, λ1·H] as the region to be segmented;
a black pixel extraction module 32, for reading, for each pixel (x, y) in the region to be segmented, its red component R(x, y), green component G(x, y) and blue component B(x, y), marking the pixel as a black pixel if all three components are smaller than the segmentation threshold T3 and as a non-black pixel otherwise, thereby obtaining a binary image of the region to be segmented;
a connected-region processing module 33, for applying, with black pixels as foreground and non-black pixels as background, a connected-component labeling method to the binary image of the region to be segmented, obtaining a series of labeled connected regions;
a connected-region screening module 34, for measuring the width W_CR and height H_CR of each connected region, computing its aspect ratio R_WH = W_CR / H_CR and the area of its bounding rectangle S_WH = W_CR · H_CR, keeping the region if it simultaneously satisfies T4 ≤ R_WH ≤ T5 and S_WH ≥ T6, and discarding it otherwise;
a black region obtaining module 35, for recording the coordinates of the bounding rectangles of the retained connected regions, looking up the rectangular areas at the same coordinates in the region to be segmented, and outputting these rectangular areas as the black regions.
In the region-to-be-segmented selection module 31, λ1 ∈ [0.3, 0.8]; preferably λ1 is 0.5 or 0.6.
In the black pixel extraction module 32, "all three components are smaller than the segmentation threshold T3" means that the following hold simultaneously: R(x, y) < T3, G(x, y) < T3 and B(x, y) < T3.
The segmentation threshold T3 ∈ [55, 65]; preferably T3 is 60.
In the connected-region processing module 33, the connected-component labeling method can be any existing connected-component labeling algorithm, for example: Feng Haiwen, Niu Lianqiang, Liu Xiaoming, "An efficient scan-based connected-component labeling algorithm", Computer Engineering and Applications, 2014, 50(23): 31–35.
In the connected-region screening module 34, T4 ∈ [0.2, 0.3], T5 ∈ [4, 5] and T6 ∈ [15, 25]; preferably T4 is 0.26, T5 is 4.5 and T6 is 20.
Further, the color light image extraction module 5 includes:
a color segmentation module 51, for reading, for each pixel (x, y) in the lamp panel image, its red component R(x, y), green component G(x, y) and blue component B(x, y), marking the pixel as a red pixel if R(x, y) > G(x, y) and R(x, y) > B(x, y), as a green pixel if G(x, y) > R(x, y) and G(x, y) > B(x, y), as a blue pixel if B(x, y) > R(x, y) and B(x, y) > G(x, y), as a yellow pixel if the yellow condition parameterized by the threshold T7 is satisfied (the inequalities are given as formulas in the original), and as an other pixel otherwise;
a color region obtaining module 52, for applying, with the red, green, yellow and blue pixels in turn as foreground and all other pixels as background, the connected-component labeling method to the binary image of the lamp panel image, obtaining a series of labeled red, green, yellow and blue regions;
a color light image obtaining module 53, for extracting from the original image the sub-images corresponding to the red, green, yellow and blue regions as the red-light, green-light, yellow-light and blue-light images.
T7 ∈ [1.1, 1.3]; preferably T7 is 1.2.
Further, the color light shape recognition module 6 includes:
a recognition feature extraction module 61, for extracting features from the red-light, green-light, yellow-light and blue-light images respectively with the signal lamp shape recognition model;
a recognition classification determination module 62, for computing the similarity Simi_k between the extracted features and the features of each signal lamp shape class, where k denotes the k-th class, k ∈ {1, 2, ..., 10}, and choosing the class with the largest Simi_k as the signal lamp shape class.
Compared with existing traffic light recognition techniques, the present invention, on the one hand, reduces the amount of computation by delimiting the region to be segmented and extracting the lamp panel image; on the other hand, it obtains the signal lamp shape recognition model by training a convolutional neural network, which improves the accuracy of signal lamp shape recognition.
The above are merely preferred embodiments of the present invention and are not intended to limit its scope of protection. It should be understood that the present invention is not limited to the implementations described herein, which are described in order to help those skilled in the art practice the invention. Any skilled person may make further improvements and refinements without departing from the spirit and scope of the invention; the invention is therefore limited only by the content and scope of its claims, which are intended to cover all alternatives and equivalents falling within the spirit and scope of the invention as defined by the appended claims.

Claims (16)

1. A traffic signal recognition method based on a convolutional neural network, characterized in that the method includes:
a first step of choosing sample images of labeled signal lamp shapes, training a convolutional neural network, and outputting the trained signal lamp shape recognition model;
a second step of acquiring a color image of the scene with an in-vehicle camera;
a third step of choosing the region to be segmented of the color image, performing black segmentation, and extracting the black regions;
a fourth step of extracting the sub-images corresponding to the black regions in the original image as the lamp panel images;
a fifth step of segmenting red, green, yellow and blue regions from the lamp panel image and extracting the corresponding sub-images of the original image as the red-light, green-light, yellow-light and blue-light images;
a sixth step of performing signal lamp shape recognition on the red-light, green-light, yellow-light and blue-light images with the trained signal lamp shape recognition model; and
a seventh step of outputting the color and shape recognition results of the signal lamps;
wherein the in-vehicle camera is mounted at a position on the vehicle from which the road image ahead of the vehicle can be captured.
2. The method of claim 1, characterized in that the first step includes:
a sample selection step of choosing LINum color images of labeled signal lamp shapes as sample images;
an initial training step of training the convolutional neural network on the sample images to obtain an initial training model; a second training step of choosing TINum test images and repeatedly training on the test images starting from the initial training model until the model converges;
a model output step of taking the converged model as the signal lamp shape recognition model and outputting it.
3. The method of claim 2, characterized in that the sample selection step chooses labeled images of the different signal lamp shapes in fixed proportions; for the LINum color images of labeled signal lamp shapes, the number of circle images is RC1·LINum, the number of horizontal-bar images is RC2·LINum, the number of vertical-bar images is RC3·LINum, the number of up-arrow images is RC4·LINum, the number of left-arrow images is RC5·LINum, the number of right-arrow images is RC6·LINum, the number of cross images is RC7·LINum, the number of pedestrian images is RC8·LINum, the number of bicycle images is RC9·LINum, and the number of other-shape images is RC10·LINum, where the proportions sum to 1 and satisfy T1 ≤ RC_i < 0.1 for i ∈ {1, 2, 3} and 0.1 ≤ RC_j ≤ T2 for j ∈ {4, 5, ..., 10};
wherein LINum ≥ 4000, T1 ∈ [0.05, 0.08] and T2 ∈ [0.15, 0.25].
4. The method of claim 2, characterized in that in the initial training step the convolutional neural network includes:
an input layer, which takes a color image of width Width and height Height;
a first convolutional layer, which outputs Th_CK1 convolution kernels of size CKSi1×CKSi1 with stride 1;
a first pooling layer, which applies max pooling with a KSi×KSi kernel and stride 2;
a second convolutional layer, which outputs Th_CK2 convolution kernels of size CKSi2×CKSi2 with stride 1;
a second pooling layer, which applies max pooling with a KSi×KSi kernel and stride 2;
a third convolutional layer, which outputs Th_CK3 convolution kernels of size CKSi3×CKSi3 with stride 1;
a first fully connected layer, which uses ReLU as the activation function and outputs Th_Neur neurons;
a second fully connected layer, which outputs 10 neurons, i.e. the 10 signal lamp shape classes;
wherein the 10 signal lamp shape classes of the second fully connected layer include: circle, horizontal bar, vertical bar, up arrow, left arrow, right arrow, cross, pedestrian, bicycle and other shapes; Width ∈ [18, 50], Height ∈ [18, 50], Th_CK1 ∈ [6, 20], CKSi1 is 3, 5 or 7, KSi ∈ [2, 4], Th_CK2 ∈ [10, 40], CKSi2 is 3, 5 or 7, Th_CK3 ∈ [10, 40], CKSi3 is 3, 5 or 7, and Th_Neur ∈ [160, 10000].
5. The method of claim 2, characterized in that the second training step includes:
a training feature extraction step of extracting the features of the TINum test images according to the initial training model;
a training classification determination step of computing the similarity Simi_k between these features and the features of each signal lamp shape class, where k denotes the k-th class, k ∈ {1, 2, ..., 10}, and choosing the signal lamp shape class with the largest Simi_k as the candidate signal lamp shape class;
a repeated training step of computing the error between the determination result and the ground-truth result, training the model with the back-propagation algorithm, and repeating the training feature extraction step and the training classification determination step until the model converges;
wherein TINum ≥ 1000.
6. The method of claim 1, characterized in that the third step includes:
a region-to-be-segmented selection step of selecting, in a color image of height H, the image region whose pixels have vertical coordinates in [0, λ1·H] as the region to be segmented;
a black pixel extraction step of reading, for each pixel (x, y) in the region to be segmented, its red component R(x, y), green component G(x, y) and blue component B(x, y), marking the pixel as a black pixel if all three components are smaller than the segmentation threshold T3 and as a non-black pixel otherwise, thereby obtaining a binary image of the region to be segmented;
a connected-region processing step of applying, with black pixels as foreground and non-black pixels as background, a connected-component labeling method to the binary image of the region to be segmented, obtaining a series of labeled connected regions;
a connected-region screening step of measuring the width W_CR and height H_CR of each connected region, computing its aspect ratio R_WH = W_CR / H_CR and the area of its bounding rectangle S_WH = W_CR · H_CR, keeping the region if it simultaneously satisfies T4 ≤ R_WH ≤ T5 and S_WH ≥ T6, and discarding it otherwise;
a black region obtaining step of recording the coordinates of the bounding rectangles of the retained connected regions, looking up the rectangular areas at the same coordinates in the region to be segmented, and outputting these rectangular areas as the black regions;
wherein λ1 ∈ [0.3, 0.8], T3 ∈ [55, 65], T4 ∈ [0.2, 0.3], T5 ∈ [4, 5] and T6 ∈ [15, 25].
7. The method of claim 1, characterized in that the fifth step includes:
a color segmentation step of reading, for each pixel (x, y) in the lamp panel image, its red component R(x, y), green component G(x, y) and blue component B(x, y), marking the pixel as a red pixel if R(x, y) > G(x, y) and R(x, y) > B(x, y), as a green pixel if G(x, y) > R(x, y) and G(x, y) > B(x, y), as a blue pixel if B(x, y) > R(x, y) and B(x, y) > G(x, y), as a yellow pixel if the yellow condition parameterized by the threshold T7 is satisfied (the inequalities are given as formulas in the original), and as an other pixel otherwise;
a color region obtaining step of applying, with the red, green, yellow and blue pixels in turn as foreground and all other pixels as background, the connected-component labeling method to the binary image of the lamp panel image, obtaining a series of labeled red, green, yellow and blue regions;
a color light image obtaining step of extracting from the original image the sub-images corresponding to the red, green, yellow and blue regions as the red-light, green-light, yellow-light and blue-light images;
wherein T7 ∈ [1.1, 1.3].
8. The method of claim 1, characterized in that the sixth step includes:
a recognition feature extraction step of extracting features from the red-light, green-light, yellow-light and blue-light images respectively with the signal lamp shape recognition model;
a recognition classification determination step of computing the similarity Simi_k between the extracted features and the features of each signal lamp shape class, where k denotes the k-th class, k ∈ {1, 2, ..., 10}, and choosing the class with the largest Simi_k as the signal lamp shape class.
9. A traffic signal recognition device based on a convolutional neural network, characterized in that the device includes:
a signal lamp shape recognition model training module, for choosing sample images of labeled signal lamp shapes, training a convolutional neural network, and outputting the trained signal lamp shape recognition model;
a scene image acquisition module, for acquiring a color image of the scene with an in-vehicle camera;
a black region extraction module, for choosing the region to be segmented of the color image, performing black segmentation, and extracting the black regions;
a lamp panel image extraction module, for extracting the sub-images corresponding to the black regions in the original image as the lamp panel images;
a color light image extraction module, for segmenting red, green, yellow and blue regions from the lamp panel image and extracting the corresponding sub-images of the original image as the red-light, green-light, yellow-light and blue-light images;
a color light shape recognition module, for performing signal lamp shape recognition on the red-light, green-light, yellow-light and blue-light images with the trained signal lamp shape recognition model; and
a signal lamp color and shape output module, for outputting the color and shape recognition results of the signal lamps;
wherein the in-vehicle camera is mounted at a position on the vehicle from which the road image ahead of the vehicle can be captured.
10. The device of claim 9, characterized in that the signal lamp shape recognition model training module includes:
a sample selection module, for choosing LINum color images of labeled signal lamp shapes as sample images;
an initial training module, for training the convolutional neural network on the sample images to obtain an initial training model;
a second training module, for choosing TINum test images and repeatedly training on the test images starting from the initial training model until the model converges;
a model output module, for taking the converged model as the signal lamp shape recognition model and outputting it.
11. The device of claim 10, characterized in that the sample selection module chooses labeled images of the different signal lamp shapes in fixed proportions; for the LINum color images of labeled signal lamp shapes, the number of circle images is RC1·LINum, the number of horizontal-bar images is RC2·LINum, the number of vertical-bar images is RC3·LINum, the number of up-arrow images is RC4·LINum, the number of left-arrow images is RC5·LINum, the number of right-arrow images is RC6·LINum, the number of cross images is RC7·LINum, the number of pedestrian images is RC8·LINum, the number of bicycle images is RC9·LINum, and the number of other-shape images is RC10·LINum, where the proportions sum to 1 and satisfy T1 ≤ RC_i < 0.1 for i ∈ {1, 2, 3} and 0.1 ≤ RC_j ≤ T2 for j ∈ {4, 5, ..., 10};
wherein LINum ≥ 4000, T1 ∈ [0.05, 0.08] and T2 ∈ [0.15, 0.25].
12. The device of claim 10, characterized in that in the initial training module the convolutional neural network includes:
an input layer, which takes a color image of width Width and height Height;
a first convolutional layer, which outputs Th_CK1 convolution kernels of size CKSi1×CKSi1 with stride 1;
a first pooling layer, which applies max pooling with a KSi×KSi kernel and stride 2;
a second convolutional layer, which outputs Th_CK2 convolution kernels of size CKSi2×CKSi2 with stride 1;
a second pooling layer, which applies max pooling with a KSi×KSi kernel and stride 2;
a third convolutional layer, which outputs Th_CK3 convolution kernels of size CKSi3×CKSi3 with stride 1;
a first fully connected layer, which uses ReLU as the activation function and outputs Th_Neur neurons;
a second fully connected layer, which outputs 10 neurons, i.e. the 10 signal lamp shape classes;
wherein the 10 signal lamp shape classes of the second fully connected layer include: circle, up arrow, left arrow, right arrow, cross and other shapes; Width ∈ [18, 50], Height ∈ [18, 50], Th_CK1 ∈ [6, 20], CKSi1 is 3, 5 or 7, KSi ∈ [2, 4], Th_CK2 ∈ [10, 40], CKSi2 is 3, 5 or 7, Th_CK3 ∈ [10, 40], CKSi3 is 3, 5 or 7, and Th_Neur ∈ [160, 10000].
13. The device of claim 10, characterized in that the second training module includes:
a training feature extraction module, for extracting the features of the TINum test images according to the initial training model; a training classification determination module, for computing the similarity Simi_k between these features and the features of each signal lamp shape class, where k denotes the k-th class, k ∈ {1, 2, ..., 10}, and choosing the signal lamp shape class with the largest Simi_k as the candidate signal lamp shape class;
a repeated training module, for computing the error between the determination result and the ground-truth result, training the model with the back-propagation algorithm, and repeatedly invoking the training feature extraction module and the training classification determination module until the model converges;
wherein TINum ≥ 1000.
14. The device of claim 9, characterized in that the black region extraction module includes:
a region-to-be-segmented selection module, for selecting, in a color image of height H, the image region whose pixels have vertical coordinates in [0, λ1·H] as the region to be segmented;
a black pixel extraction module, for reading, for each pixel (x, y) in the region to be segmented, its red component R(x, y), green component G(x, y) and blue component B(x, y), marking the pixel as a black pixel if all three components are smaller than the segmentation threshold T3 and as a non-black pixel otherwise, thereby obtaining a binary image of the region to be segmented;
a connected-region processing module, for applying, with black pixels as foreground and non-black pixels as background, a connected-component labeling method to the binary image of the region to be segmented, obtaining a series of labeled connected regions;
a connected-region screening module, for measuring the width W_CR and height H_CR of each connected region, computing its aspect ratio R_WH = W_CR / H_CR and the area of its bounding rectangle S_WH = W_CR · H_CR, keeping the region if it simultaneously satisfies T4 ≤ R_WH ≤ T5 and S_WH ≥ T6, and discarding it otherwise;
a black region obtaining module, for recording the coordinates of the bounding rectangles of the retained connected regions, looking up the rectangular areas at the same coordinates in the region to be segmented, and outputting these rectangular areas as the black regions;
wherein λ1 ∈ [0.3, 0.8], T3 ∈ [55, 65], T4 ∈ [0.2, 0.3], T5 ∈ [4, 5] and T6 ∈ [15, 25].
15. The device as claimed in claim 9, characterized in that the color lamp image extraction module comprises:
A color segmentation module, configured to compute, for each pixel (x, y) in the lamp panel image, the red component value R(x, y), the green component value G(x, y), and the blue component value B(x, y); if R(x, y) > G(x, y) and R(x, y) > B(x, y), the pixel is labeled a red pixel; if G(x, y) > R(x, y) and G(x, y) > B(x, y), the pixel is labeled a green pixel; if B(x, y) > R(x, y) and B(x, y) > G(x, y), the pixel is labeled a blue pixel; if the two yellow conditions hold (the formulas, which involve the threshold T7, are not reproduced in this text), the pixel is labeled a yellow pixel; otherwise the pixel is labeled as other;
A color region acquisition module, configured to take the red, green, yellow, and blue pixels in turn as foreground points and all other pixels as background points, and to apply a connected region labeling method to the binary images of the lamp panel image, obtaining a series of labeled red, green, yellow, and blue regions;
A color lamp image acquisition module, configured to extract from the original image the image regions corresponding to the red, green, yellow, and blue regions as the red, green, yellow, and blue lamp images respectively;
wherein T7 ∈ [1.1, 1.3].
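The per-pixel color test of the color segmentation module can be sketched as below. Note a loud assumption: the patent's yellow-pixel formulas are not reproduced in the source text, so the yellow branch here is a guess, reading T7 ∈ [1.1, 1.3] as a bound on how close R and G must be while both exceed B, and it is checked first so a yellowish pixel is not captured by the red branch. None of this ordering or the yellow condition is confirmed by the patent:

```python
def classify_pixel(R, G, B, T7=1.2):
    """Label one (R, G, B) pixel following the claimed dominant-channel tests."""
    # ASSUMED yellow test (source formulas elided): R and G both dominate
    # B and differ by at most a factor of T7; checked before the red test.
    if R > B and G > B and max(R, G) <= T7 * min(R, G):
        return "yellow"
    if R > G and R > B:          # red dominates both other channels
        return "red"
    if G > R and G > B:          # green dominates both other channels
        return "green"
    if B > R and B > G:          # blue dominates both other channels
        return "blue"
    return "other"               # ties and everything else
```

Applying this to every pixel yields the per-color binary images that the color region acquisition module then labels into connected regions.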
16. The device as claimed in claim 9, characterized in that the color lamp image shape recognition module comprises:
A recognition feature extraction module, configured to extract the features of the red, green, yellow, and blue lamp images respectively using the signal lamp shape recognition model;
A recognition class determination module, configured to compute the similarity Sim_ik between the extracted recognition feature and the feature of each signal lamp shape class, where k denotes the k-th class and k ∈ {1, 2, ..., 10}, and to select the class with the maximum Sim_ik value as the signal lamp shape class.
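The class-determination step above (compute Sim_ik against each of the 10 shape-class features, take the argmax) can be sketched as follows. The patent does not name the similarity measure, so cosine similarity is used here purely as an illustrative assumption:

```python
import numpy as np

def classify_shape(feature, class_features):
    """Pick the signal-lamp shape class with the largest Sim_ik.

    feature: length-D feature vector from the shape recognition model.
    class_features: 10 x D matrix, one row per shape class k = 1..10.
    Cosine similarity is an assumed stand-in for the unspecified Sim_ik.
    """
    f = feature / np.linalg.norm(feature)
    C = class_features / np.linalg.norm(class_features, axis=1, keepdims=True)
    sim = C @ f                      # Sim_ik for each class k
    return int(np.argmax(sim)) + 1   # 1-based class index, k in {1,...,10}
```

With a CNN-based recognizer, the class features would typically come from the same network's feature extractor applied to class exemplars, though the claim only states that a similarity is computed.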
CN201611021336.1A 2016-11-21 2016-11-21 Traffic signal lamp recognition method and device based on convolution neural network Pending CN106570494A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201611021336.1A CN106570494A (en) 2016-11-21 2016-11-21 Traffic signal lamp recognition method and device based on convolution neural network

Publications (1)

Publication Number Publication Date
CN106570494A true CN106570494A (en) 2017-04-19

Family

ID=58542346

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611021336.1A Pending CN106570494A (en) 2016-11-21 2016-11-21 Traffic signal lamp recognition method and device based on convolution neural network

Country Status (1)

Country Link
CN (1) CN106570494A (en)

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107301773A (en) * 2017-06-16 2017-10-27 上海肇观电子科技有限公司 A kind of method and device to destination object prompt message
CN107403169A (en) * 2017-08-08 2017-11-28 上海识加电子科技有限公司 Signal lamp detection recognition method and device
CN107644538A (en) * 2017-11-01 2018-01-30 广州汽车集团股份有限公司 The recognition methods of traffic lights and device
CN107689157A (en) * 2017-08-30 2018-02-13 电子科技大学 Traffic intersection based on deep learning can passing road planing method
CN108108761A (en) * 2017-12-21 2018-06-01 西北工业大学 A kind of rapid transit signal lamp detection method based on depth characteristic study
CN108189043A (en) * 2018-01-10 2018-06-22 北京飞鸿云际科技有限公司 A kind of method for inspecting and crusing robot system applied to high ferro computer room
WO2018201835A1 (en) * 2017-05-03 2018-11-08 腾讯科技(深圳)有限公司 Signal light state recognition method, device and vehicle-mounted control terminal and motor vehicle
CN109389079A (en) * 2018-09-30 2019-02-26 无锡职业技术学院 A kind of traffic lights recognition methods
CN109544955A (en) * 2018-12-26 2019-03-29 广州小鹏汽车科技有限公司 A kind of state acquiring method and system of traffic lights
CN109711227A (en) * 2017-10-25 2019-05-03 北京京东尚科信息技术有限公司 Traffic light recognition method, traffic light identifier and computer readable storage medium
WO2019119356A1 (en) * 2017-12-21 2019-06-27 华为技术有限公司 Information detection method and mobile device
CN110660254A (en) * 2018-06-29 2020-01-07 北京市商汤科技开发有限公司 Traffic signal lamp detection and intelligent driving method and device, vehicle and electronic equipment
CN110992713A (en) * 2019-12-17 2020-04-10 多伦科技股份有限公司 Traffic signal vehicle-road cooperative data construction method
CN112733815A (en) * 2021-03-30 2021-04-30 广州赛特智能科技有限公司 Traffic light identification method based on RGB outdoor road scene image
CN113497926A (en) * 2020-04-08 2021-10-12 美光科技公司 Intelligent correction of visual defects

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110182469A1 (en) * 2010-01-28 2011-07-28 Nec Laboratories America, Inc. 3d convolutional neural networks for automatic human action recognition
CN102542260A (en) * 2011-12-30 2012-07-04 中南大学 Method for recognizing road traffic sign for unmanned vehicle
CN102819728A (en) * 2012-07-17 2012-12-12 中国航天科工集团第三研究院第八三五七研究所 Traffic sign detection method based on classification template matching
CN104050447A (en) * 2014-06-05 2014-09-17 奇瑞汽车股份有限公司 Traffic light identification method and device
CN105956524A (en) * 2016-04-22 2016-09-21 北京智芯原动科技有限公司 Method and device for identifying traffic signs

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018201835A1 (en) * 2017-05-03 2018-11-08 腾讯科技(深圳)有限公司 Signal light state recognition method, device and vehicle-mounted control terminal and motor vehicle
CN107301773A (en) * 2017-06-16 2017-10-27 上海肇观电子科技有限公司 A kind of method and device to destination object prompt message
CN107403169A (en) * 2017-08-08 2017-11-28 上海识加电子科技有限公司 Signal lamp detection recognition method and device
CN107403169B (en) * 2017-08-08 2018-09-28 上海识加电子科技有限公司 Signal lamp detection recognition method and device
CN107689157B (en) * 2017-08-30 2021-04-30 电子科技大学 Traffic intersection passable road planning method based on deep learning
CN107689157A (en) * 2017-08-30 2018-02-13 电子科技大学 Traffic intersection based on deep learning can passing road planing method
CN109711227A (en) * 2017-10-25 2019-05-03 北京京东尚科信息技术有限公司 Traffic light recognition method, traffic light identifier and computer readable storage medium
CN107644538B (en) * 2017-11-01 2020-10-23 广州汽车集团股份有限公司 Traffic signal lamp identification method and device
CN107644538A (en) * 2017-11-01 2018-01-30 广州汽车集团股份有限公司 The recognition methods of traffic lights and device
CN108108761B (en) * 2017-12-21 2020-05-01 西北工业大学 Rapid traffic signal lamp detection method based on deep feature learning
WO2019119356A1 (en) * 2017-12-21 2019-06-27 华为技术有限公司 Information detection method and mobile device
CN108108761A (en) * 2017-12-21 2018-06-01 西北工业大学 A kind of rapid transit signal lamp detection method based on depth characteristic study
CN108189043A (en) * 2018-01-10 2018-06-22 北京飞鸿云际科技有限公司 A kind of method for inspecting and crusing robot system applied to high ferro computer room
CN110660254A (en) * 2018-06-29 2020-01-07 北京市商汤科技开发有限公司 Traffic signal lamp detection and intelligent driving method and device, vehicle and electronic equipment
CN110660254B (en) * 2018-06-29 2022-04-08 北京市商汤科技开发有限公司 Traffic signal lamp detection and intelligent driving method and device, vehicle and electronic equipment
CN109389079A (en) * 2018-09-30 2019-02-26 无锡职业技术学院 A kind of traffic lights recognition methods
CN109389079B (en) * 2018-09-30 2022-02-15 无锡职业技术学院 Traffic signal lamp identification method
CN109544955A (en) * 2018-12-26 2019-03-29 广州小鹏汽车科技有限公司 A kind of state acquiring method and system of traffic lights
CN110992713A (en) * 2019-12-17 2020-04-10 多伦科技股份有限公司 Traffic signal vehicle-road cooperative data construction method
CN113497926A (en) * 2020-04-08 2021-10-12 美光科技公司 Intelligent correction of visual defects
US11587314B2 (en) 2020-04-08 2023-02-21 Micron Technology, Inc. Intelligent correction of vision deficiency
CN112733815A (en) * 2021-03-30 2021-04-30 广州赛特智能科技有限公司 Traffic light identification method based on RGB outdoor road scene image

Similar Documents

Publication Publication Date Title
CN106570494A (en) Traffic signal lamp recognition method and device based on convolution neural network
CN110020651B (en) License plate detection and positioning method based on deep learning network
CN103605977B (en) Extracting method of lane line and device thereof
CN109740478B (en) Vehicle detection and identification method, device, computer equipment and readable storage medium
CN106599792B (en) Method for detecting hand driving violation behavior
CN109389046B (en) All-weather object identification and lane line detection method for automatic driving
CN103824081B (en) Method for detecting rapid robustness traffic signs on outdoor bad illumination condition
CN106909937A (en) Traffic lights recognition methods, control method for vehicle, device and vehicle
CN106845487A (en) A kind of licence plate recognition method end to end
CN106778646A (en) Model recognizing method and device based on convolutional neural networks
CN105956524A (en) Method and device for identifying traffic signs
CN109711264A (en) A kind of bus zone road occupying detection method and device
CN103544484A (en) Traffic sign identification method and system based on SURF
CN107516064A (en) Use the parallel scene unit detection around camera chain
CN111931683B (en) Image recognition method, device and computer readable storage medium
CN107066972A (en) Natural scene Method for text detection based on multichannel extremal region
CN101369312B (en) Method and equipment for detecting intersection in image
CN107590500A (en) A kind of color recognizing for vehicle id method and device based on color projection classification
CN107704833A (en) A kind of front vehicles detection and tracking based on machine learning
CN106919939A (en) A kind of traffic signboard Tracking Recognition method and system
CN104008404B (en) Pedestrian detection method and system based on significant histogram features
Zhang et al. Automatic detection of road traffic signs from natural scene images based on pixel vector and central projected shape feature
CN111160194B (en) Static gesture image recognition method based on multi-feature fusion
CN113269161A (en) Traffic signboard detection method based on deep learning
Chiang et al. Road speed sign recognition using edge-voting principle and learning vector quantization network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170419