CN106529391A - Robust speed-limit traffic sign detection and recognition method - Google Patents

Robust speed-limit traffic sign detection and recognition method

Info

Publication number
CN106529391A
CN106529391A (application CN201610810614.5A)
Authority
CN
China
Prior art keywords
super-pixel
saliency map
region
neighborhood
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610810614.5A
Other languages
Chinese (zh)
Other versions
CN106529391B (en)
Inventor
赵祥模
刘占文
沈超
王润民
徐江
高涛
杨楠
李强
王姣姣
周洲
樊星
林杉
张珂
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Changan University
Original Assignee
Changan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Changan University
Priority to CN201610810614.5A
Publication of CN106529391A
Application granted
Publication of CN106529391B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/56 Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V 20/58 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V 20/582 Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques

Abstract

The present invention discloses a robust speed-limit traffic sign detection and recognition method. The method first establishes a multi-feature fusion saliency model and updates and iterates each layer of this model to obtain a hierarchy of saliency maps; it then solves the multilayer saliency maps to obtain an optimal saliency map, extracts regions of interest (ROIs) from the optimal saliency map, and feeds the extracted ROIs into a super-pixel pre-trained CNN model for classification, yielding the recognition result. With the saliency model based on the prior position and boundary features, traffic signs on both sides of the road are highlighted more effectively; the multi-layer fused saliency map makes full use of the structural information of the image while preserving the small-scale details inside circular signs, so that the target appears more complete and uniform. The recognition efficiency and accuracy are thereby improved.

Description

Robust speed-limit traffic sign detection and recognition method
Technical field
The invention belongs to the field of computer vision and relates to an image recognition method, and in particular to a robust speed-limit traffic sign detection and recognition method.
Background art
With economic and technological development, automobiles play an increasingly important role in people's daily life, and the demand for automobiles keeps growing. Various driver-assistance technologies have emerged, such as adaptive cruise control, collision-avoidance systems, pre-crash sensing systems, parking-space recognition systems and night-vision systems. As the technology matures, safe and efficient driverless driving will eventually be realized, i.e. the autonomous vehicle. According to statistics, about one million people die in traffic accidents every year, and the overwhelming majority of these deaths are caused by human operational error; driverless technology can greatly reduce such errors and is therefore safer than conventional vehicles. Autonomous vehicles have always been a research hotspot because of their broad application prospects in many fields; in the military domain, for example, they can replace personnel in reconnaissance, patrol, search, rescue and cargo transport. Detecting and recognizing the speed-limit traffic signs on the road is a key technology in autonomous-vehicle research, and its result directly affects the speed and safety of the autonomous vehicle.
At present, many conventional traffic sign detection and recognition methods are used for the detection and recognition of speed-limit signs, such as detection and recognition based on template matching, detection and recognition based on HOG features with an SVM classifier, detection and recognition based on LogitBoost cascade classifiers, and detection and recognition based on BP neural networks. Because traffic signs are affected by their service life and the external environment, problems such as staining, fading, distortion and reflection easily arise. The above recognition methods analyze only a single feature of the target image and under-utilize the effective information in the image, which leads to relatively low detection and recognition accuracy and unsatisfactory practical performance.
Summary of the invention
In view of the deficiencies of the above prior art, the purpose of the present invention is to draw on the human visual attention mechanism and propose a hierarchical graph-model saliency detection model that combines prior-information constraints with multi-level feature fusion. Regions of interest (ROIs) are extracted, features of the candidate regions are extracted and classified with a CNN, and a robust speed-limit traffic sign recognition system is established, thereby solving the problem of low detection and recognition accuracy in the prior art.
To solve the above technical problem, the present invention adopts the following technical solution:
A robust speed-limit traffic sign detection and recognition method, specifically comprising the following steps:
Step 1: Using an over-segmentation method, perform super-pixel segmentation on the original image, and map the super-pixel map obtained from the segmentation to an undirected weighted graph;
Step 2: From the undirected weighted graph obtained in step 1, establish a multi-feature fusion saliency model using the prior-position constraint feature and the local features of the target, where the local features include color features and boundary features;
Step 3: Based on the multi-feature fusion saliency model obtained in step 2, first establish the merging rule function for the first-layer vertices of the multilayer saliency maps, then update and iterate the multi-feature fusion saliency model of each layer to obtain the multilayer saliency maps;
Step 4: Solve the multilayer saliency maps to obtain the optimal saliency map, and obtain the ROI images on the optimal saliency map;
Step 5: Select part of the training samples to train the CNN model and obtain the trained CNN model;
Step 6: Recognize the ROI images obtained in step 4 using the trained CNN model.
The present invention also has the following distinguishing features:
Further, in said step 1, the undirected weighted graph is expressed as G = (V, E), where V is the set of vertices of the graph, each vertex representing a super-pixel region, V = {1, 2, ..., i}, and E is the set of edges of the graph.
Further, the concrete steps of step 2 include:
Step 2.1: Let x_i be a pixel of super-pixel region R_i; count the number of pixels x_i inside R_i, denoted N(x_i). Assume the prior position is x_c, set the position prior probability p_c and the weight λ (typically 0.1 to 0.2), and compute the saliency contribution degree L_i from these quantities.
Step 2.2: Let Ω_i be the neighborhood of super-pixel region R_i. Compute the boundary strength B(R_i, R_j) between R_i and each neighboring super-pixel region R_j in Ω_i, and sum the boundary strengths; count the number N(R_j) of super-pixel regions R_j inside the neighborhood Ω_i; compute the color means C_i and C_j of R_i and R_j respectively. Finally, compute the neighborhood contrast N_i of each super-pixel, which is taken as the saliency value of the corresponding vertex of the undirected weighted graph.
Step 2.3: Based on the prior-position constraint function obtained in step 2.1 and the neighborhood contrast obtained in step 2.2, establish the multi-feature fusion saliency model:
s_i = L_i * N_i
Further, the concrete steps of step 3 include:
Step 3.1: Let S(R_1, R_2) be the saliency correlation between super-pixel region R_1 and super-pixel region R_2. When R_1 and R_2 are adjacent and their saliency correlation is the minimum within both the neighborhood Ω_1 of R_1 and the neighborhood Ω_2 of R_2, merge them; otherwise, do not merge.
Here s_1 and s_2 denote the saliency values of super-pixel regions R_1 and R_2 in the multi-feature fusion saliency model, and s_i and s_j denote the saliency values of the super-pixel neighborhoods of R_1 and R_2 in that model.
Step 3.2: After region merging, repeat step 2 until the largest speed-limit sign presents a large-scale structure in the top-level saliency map, i.e. until the boundary strength of the target differs substantially from that of the background, finally obtaining the multilayer saliency maps.
Further, the concrete steps of step 4 include:
Step 4.1: Solve the multilayer saliency maps by minimizing a cost function to obtain the optimal saliency map;
Step 4.2: According to the style characteristics of the target, take a region with an aspect ratio of 1~2:1 around the target as the detection window, remove the smallest and the largest super-pixel regions inside the detection window, and slide the detection window over the optimal saliency map to obtain the ROI images.
Further, the concrete steps of step 4.1 include:
Step 4.1.1: Treat each vertex s_i of the undirected weighted graph as a random event; the random variable set S = {s_i | i ∈ V} is defined as a Markov random field on the set V with respect to the neighborhood system Ω. Based on the prior-position constraint feature and the local feature information, establish the saliency cost function of each super-pixel in the first layer S_0 of the multilayer saliency maps,
where s_i^l denotes the saliency value of region i in the layer-l saliency map S_l, V_l denotes the vertex set of the graph model corresponding to S_l, and θ is the parameter set needed to build the original saliency map, consisting of the three parameters λ, x_p and k_i: λ is the weight, x_p is the assumed prior position, and k_i is the number of super-pixel regions obtained by over-segmentation;
Step 4.1.2: Establish the saliency cost function of the interaction between vertices of adjacent layers in the multilayer saliency maps, i.e. the cost of converting the saliency of vertex i in layer l into the saliency of the merged vertex j in layer l+1. The edge-energy term is computed only over the contrast at the junction of the regions; the cost items involved are the local neighborhood contrasts of region i in layer l and region p in layer l+1, and the difference of their prior-position constraints, measured by the Euclidean distance; the balance factor β balances the influence of the two cost items;
Step 4.1.3: Finally, minimize the cost function with a graph-cut-based energy minimization method to obtain the optimal saliency map.
Further, the concrete steps of step 5 include:
Step 5.1: Perform super-pixel segmentation on the selected training samples using the over-segmentation method to obtain the super-pixel map of each training sample; randomly select 10%~30% of the super-pixels of each training sample as centers, and pad the super-pixel map with the average pixel value of the training sample border to obtain padded images of the same size as the ROI images obtained in step 4;
Step 5.2: Determine the label of a padded image according to its area overlap ratio with the training sample: when the overlap ratio is greater than 50%, the label of the padded image is 1; when the overlap ratio is less than 50%, the label is 0;
Step 5.3: Feed the padded images from step 5.2 into the CNN model for training to obtain the trained CNN model.
Further, the concrete step of step 6 is: for the target, classify the ROI images obtained in step 4 using the trained CaffeNet model, and output the final classification result through the classifier.
The beneficial effects of the present invention are as follows:
(1) Both the prior-position constraint feature and the local features of the target image are taken into account, so the information in the target image is fully utilized, which is conducive to accurate detection of speed-limit signs.
(2) By constructing the hierarchical saliency model and the optimal saliency map, the saliency of the speed-limit sign is enhanced to the greatest extent and the saliency of the background is reduced, thereby improving the detection performance.
(3) In the training and recognition stages, the visual attention mechanism is incorporated into target detection, and the CNN is trained for speed-limit signs with a super-pixel pre-training strategy, which improves the learning and recognition capability of the CNN model; the final effect is beyond the reach of conventional methods.
Description of the drawings
Fig. 1 is the flow chart of the method of the invention.
Fig. 2 is a schematic diagram of the hierarchical merging rule.
Fig. 3 is a schematic diagram of the concrete hierarchical merging process.
Fig. 4 is a schematic diagram of obtaining ROIs based on the optimal saliency map.
Fig. 5 illustrates the super-pixel pre-training strategy.
Fig. 6 shows the CNN architecture.
Fig. 7 shows the test results.
The present invention is further explained below with reference to the drawings and specific embodiments.
Specific embodiment
To make the purpose, technical solution and advantages of the present invention clearer, the invention is further described below with reference to the drawings and embodiments. A robust speed-limit traffic sign detection and recognition method is characterized in that, drawing on the human visual attention mechanism, a hierarchical graph-model saliency detection model based on prior-information constraints and multi-level feature fusion is proposed; regions of interest (ROIs) are extracted, features of the candidate regions are extracted and classified with a CNN, and a robust speed-limit traffic sign recognition system is established. The method specifically comprises the following steps:
Step 1: Using the over-segmentation method, perform super-pixel segmentation on the original image, and map the super-pixel map obtained from the segmentation to an undirected weighted graph, expressed as G = (V, E), where V is the set of vertices of the graph, each vertex representing a super-pixel region, V = {1, 2, ..., i}, and E is the set of edges of the graph;
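As a concrete illustration of step 1 only, the over-segmentation and the undirected weighted graph can be sketched in Python as below. SLIC super-pixels and mean-color edge weights are assumptions made for illustration (the patent only requires an over-segmentation method and a weighted graph over the resulting regions); scikit-image 0.20+ with its NetworkX-based region adjacency graph is assumed to be available, and "road_scene.jpg" is a hypothetical input frame.

# Illustrative sketch of step 1: super-pixel over-segmentation and the
# undirected weighted graph G = (V, E) over the resulting regions.
from skimage import io, segmentation, graph

image = io.imread("road_scene.jpg")                    # hypothetical input frame
labels = segmentation.slic(image, n_segments=300, compactness=10, start_label=1)

# Region adjacency graph: one vertex per super-pixel region, edges between
# adjacent regions weighted by the difference of their mean colors.
rag = graph.rag_mean_color(image, labels)
print(rag.number_of_nodes(), "super-pixel vertices,", rag.number_of_edges(), "edges")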
Step 2: On the basis of the undirected weighted graph obtained in step 1, establish the multi-feature fusion saliency model using the prior-position constraint feature and the local features of the target, where the local features include color features and boundary features;
Step 2.1: Let x_i be a pixel of super-pixel region R_i; count the number of pixels x_i inside R_i, denoted N(x_i). Assume the prior position is x_c, set the position prior probability p_c and the weight λ (typically 0.1 to 0.2), and compute the saliency contribution degree L_i from these quantities.
Step 2.2: Let Ω_i be the neighborhood of super-pixel region R_i. Compute the boundary strength B(R_i, R_j) between R_i and each neighboring super-pixel region R_j in Ω_i, and sum the boundary strengths; count the number N(R_j) of super-pixel regions R_j inside the neighborhood Ω_i; compute the color means C_i and C_j of R_i and R_j respectively. Finally, compute the neighborhood contrast N_i of each super-pixel, which is taken as the saliency value of the corresponding vertex of the undirected weighted graph.
Step 2.3: Based on the prior-position constraint function obtained in step 2.1 and the neighborhood contrast obtained in step 2.2, establish the multi-feature fusion saliency model:
s_i = L_i * N_i
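The computations of steps 2.1 to 2.3 can be sketched numerically as below. Because the exact formulas for L_i and N_i are given in the patent drawings and are not reproduced in this text, the concrete expressions here are illustrative assumptions built only from the quantities named above (N(x_i), x_c, p_c, λ, B(R_i, R_j), N(R_j), C_i and C_j); only the final product s_i = L_i * N_i is taken directly from the description.

# Hypothetical sketch of the multi-feature fusion saliency model of step 2.
# The concrete forms of L_i and N_i below are assumptions; only s_i = L_i * N_i
# is stated in the description.
import numpy as np

def position_prior_contribution(region_pixels, x_c, p_c, lam=0.15):
    """L_i: assumed form -- pixels close to the prior position x_c contribute more."""
    n_xi = len(region_pixels)                           # N(x_i): pixel count of region R_i
    dists = np.linalg.norm(region_pixels - x_c, axis=1)
    return (lam * p_c / n_xi) * np.sum(np.exp(-dists / (dists.max() + 1e-8)))

def neighborhood_contrast(C_i, neighbors):
    """N_i: assumed form -- boundary-strength-weighted color contrast to the neighborhood."""
    num, den = 0.0, 0.0
    for C_j, B_ij, N_Rj in neighbors:                   # (color mean, boundary strength, region count weight)
        num += B_ij * N_Rj * np.linalg.norm(np.asarray(C_i) - np.asarray(C_j))
        den += B_ij * N_Rj
    return num / max(den, 1e-8)

def saliency(region_pixels, x_c, p_c, C_i, neighbors, lam=0.15):
    """Step 2.3: s_i = L_i * N_i."""
    return position_prior_contribution(region_pixels, x_c, p_c, lam) * \
           neighborhood_contrast(C_i, neighbors)

Under this reading, s_i is large only for regions that both lie near the prior position and contrast strongly with their neighborhood, which matches the intent of combining the prior-position constraint with the local color and boundary features.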
Step 3: Based on the multi-feature fusion saliency model obtained in step 2, first establish the merging rule function for the first-layer vertices of the multilayer saliency maps, then update and iterate the multi-feature fusion saliency model of each layer to obtain the multilayer saliency maps S_0, S_1, S_2, ..., S_k; a schematic diagram of the hierarchical merging rule is shown in Fig. 2;
Step 3.1: Let S(R_1, R_2) be the saliency correlation between super-pixel region R_1 and super-pixel region R_2. When R_1 and R_2 are adjacent and their saliency correlation is the minimum within both the neighborhood Ω_1 of R_1 and the neighborhood Ω_2 of R_2, merge them; otherwise, do not merge.
Here s_1 and s_2 denote the saliency values of super-pixel regions R_1 and R_2 in the multi-feature fusion saliency model, and s_i and s_j denote the saliency values of the super-pixel neighborhoods of R_1 and R_2 in that model.
Step 3.2: After region merging, repeat step 2 until the largest speed-limit sign presents a large-scale structure in the top-level saliency map, i.e. until the multilayer saliency map S_4 is reached, finally obtaining the multilayer saliency maps; for the concrete detection task, the hierarchical merging process is shown in Fig. 3;
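The merging rule of step 3.1 can be sketched as below. The saliency correlation S(R_1, R_2) is modeled here simply as the absolute difference of the two saliency values, which is an assumption; the mutual-minimum test within both neighborhoods and the layer-by-layer repetition follow the description.

# Simplified sketch of the hierarchical merging rule (steps 3.1-3.2).
# ASSUMPTION: S(R1, R2) is modeled as |s1 - s2|; the patent defines it via the
# multi-feature fusion saliency model, whose exact formula is not reproduced here.
def merge_pass(saliency, neighbors):
    """saliency: {region: value}; neighbors: {region: set of adjacent regions}.
    Returns the list of (Ri, Rj) pairs to merge in this layer."""
    def corr(a, b):
        return abs(saliency[a] - saliency[b])

    merges = []
    for r1 in neighbors:
        for r2 in neighbors[r1]:
            # merge only if (r1, r2) is the minimum-correlation pair inside
            # BOTH neighborhoods Omega_1 and Omega_2
            best_for_r1 = min(neighbors[r1], key=lambda n: corr(r1, n))
            best_for_r2 = min(neighbors[r2], key=lambda n: corr(r2, n))
            if best_for_r1 == r2 and best_for_r2 == r1 and r1 < r2:
                merges.append((r1, r2))
    return merges

# Example: four regions in a chain 1-2-3-4
saliency = {1: 0.9, 2: 0.85, 3: 0.2, 4: 0.25}
neighbors = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(merge_pass(saliency, neighbors))   # -> [(1, 2), (3, 4)]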
Step 4: Solve the multilayer saliency maps to obtain the optimal saliency map, slide the detection window over the optimal saliency map, and obtain the ROI images; for the concrete detection task, the process of obtaining ROIs is shown in Fig. 4;
Step 4.1: Using the equivalence of Markov random fields and the Gibbs distribution, construct the cost function fusing the multilayer saliency maps from pairwise clique potentials, and solve the multilayer saliency maps by minimizing this cost function to obtain the optimal saliency map;
Step 4.1.1: Treat each vertex s_i of the undirected weighted graph as a random event; the random variable set S = {s_i | i ∈ V} is defined as a Markov random field on the set V with respect to the neighborhood system Ω. Based on the prior-position constraint feature and the local feature information, establish the saliency cost function of each super-pixel in the first layer S_0 of the multilayer saliency maps,
where s_i^l denotes the saliency value of region i in the layer-l saliency map S_l, V_l denotes the vertex set of the graph model corresponding to S_l, and θ is the parameter set needed to build the original saliency map, consisting of the three parameters λ, x_p and k_i: λ is the weight, x_p is the assumed prior position, and k_i is the number of super-pixel regions obtained by over-segmentation;
Step 4.1.2: Establish the saliency cost function of the interaction between vertices of adjacent layers in the multilayer saliency maps, i.e. the cost of converting the saliency of vertex i in layer l into the saliency of the merged vertex j in layer l+1. The edge-energy term is computed only over the contrast at the junction of the regions; the cost items involved are the local neighborhood contrasts of region i in layer l and region p in layer l+1, and the difference of their prior-position constraints, measured by the Euclidean distance; the balance factor β balances the influence of the two cost items;
Step 4.1.3: Finally, minimize the cost function with a graph-cut-based energy minimization method to obtain the optimal saliency map;
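The cost of step 4.1 combines a per-vertex saliency cost with inter-layer interaction costs. The sketch below assembles a generic two-term energy of that kind, E(S) = Σ_i D_i(s_i) + β Σ_(i,j) V_ij(s_i, s_j), and minimizes it with iterated conditional modes (ICM) purely as a runnable stand-in; the patent itself minimizes its fused cost function with a graph cut, and both the toy costs and the placement of the balance factor β here are assumptions.

# Assumed sketch of a two-term energy of the same general shape as step 4.1.
# The data term D_i and the Potts-style pairwise term below are placeholders
# for the patent's per-super-pixel saliency cost and inter-layer interaction
# cost; the minimizer is ICM rather than the graph cut used in the patent.
import numpy as np

def icm_binary(data_cost, edges, beta=0.5, n_iters=10):
    """data_cost: (n, 2) array, cost of assigning label 0/1 to each vertex.
    edges: list of (i, j) vertex pairs. Returns a binary labeling (salient / not)."""
    n = data_cost.shape[0]
    labels = np.argmin(data_cost, axis=1)          # initialize from the data term alone
    for _ in range(n_iters):
        for i in range(n):
            costs = data_cost[i].copy()
            for (a, b) in edges:
                j = b if a == i else (a if b == i else None)
                if j is not None:                  # penalize disagreeing with neighbor j
                    costs += beta * (np.array([0, 1]) != labels[j])
            labels[i] = np.argmin(costs)
    return labels

# Toy example: 4 vertices, vertices 0-1 clearly salient, 2-3 clearly background
data_cost = np.array([[2.0, 0.1], [1.5, 0.2], [0.1, 2.0], [0.2, 1.8]])
edges = [(0, 1), (1, 2), (2, 3)]
print(icm_binary(data_cost, edges))                # -> [1 1 0 0]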
Step 4.2: According to the style characteristics of the speed-limit sign, take a region with an aspect ratio of 2:1 around the sign as the detection window, remove the smallest and the largest super-pixel regions inside the detection window, and slide the detection window over the optimal saliency map to obtain the ROI images;
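Step 4.2 can be sketched as a plain sliding-window scan over the optimal saliency map, as below. The window size, stride and mean-saliency threshold are assumptions for illustration, and the removal of the smallest and largest super-pixel regions inside the window is omitted here for brevity.

# Illustrative sketch of step 4.2: slide a 2:1 detection window over the
# optimal saliency map and keep windows with high mean saliency as ROIs.
import numpy as np

def extract_rois(saliency_map, win_h=96, win_w=48, stride=16, thresh=0.5):
    """saliency_map: 2-D array in [0, 1]. Returns (row, col, h, w) ROI boxes."""
    rois = []
    H, W = saliency_map.shape
    for r in range(0, H - win_h + 1, stride):
        for c in range(0, W - win_w + 1, stride):
            window = saliency_map[r:r + win_h, c:c + win_w]
            if window.mean() > thresh:             # keep salient windows as ROI candidates
                rois.append((r, c, win_h, win_w))
    return rois

# Toy saliency map with one bright 2:1 block
sal = np.zeros((240, 320))
sal[40:136, 100:148] = 1.0
print(len(extract_rois(sal)) > 0)                  # -> True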
Step 5: Select part of the training samples to train the CNN model and obtain the trained CNN model;
Step 5.1: Perform super-pixel segmentation on the selected training samples using the over-segmentation method to obtain the super-pixel map of each training sample; randomly select 10%~30% of the super-pixels of each training sample as centers (a ratio of 10% is chosen for samples with a large area and 30% for samples with a small area), and pad around them with the average pixel value of the super-pixels on the training sample border to obtain padded images of the same size as the CNN input ROI images; the concrete super-pixel pre-training strategy is shown in Fig. 5;
Step 5.2: Determine the label of a padded image according to its area overlap ratio with the training sample: when the overlap ratio is greater than 50%, the label of the padded image is 1; when the overlap ratio is less than 50%, the label is 0;
Step 5.3: Feed the padded images from step 5.2 into the CNN model for training to obtain the trained CNN model, as shown in Fig. 6;
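The padded-patch generation of steps 5.1 and 5.2 can be sketched as below. SLIC is again an assumed over-segmentation method, patches are assumed to be centered on super-pixel centroids, and the label is computed from the fraction of patch pixels covered by a sign mask, standing in for the area overlap ratio of the description; sign_mask is a hypothetical ground-truth mask of the sign in the training sample.

# Sketch of the super-pixel pre-training patch generation (steps 5.1-5.2).
# ASSUMPTIONS: SLIC over-segmentation, centroid-centered patches, and labels
# from the fraction of patch pixels covered by the (hypothetical) sign mask.
import numpy as np
from skimage import segmentation

def make_patches(sample, sign_mask, patch=48, ratio=0.2, seed=0):
    """sample: HxWx3 uint8 training image; sign_mask: HxW bool mask of the sign."""
    rng = np.random.default_rng(seed)
    labels = segmentation.slic(sample, n_segments=100, start_label=1)
    chosen = rng.choice(np.unique(labels), size=max(1, int(ratio * labels.max())), replace=False)

    # pad with the average pixel value of the sample border (step 5.1)
    border_mean = np.concatenate([sample[0], sample[-1], sample[:, 0], sample[:, -1]]).mean(axis=0)
    pad = patch // 2
    padded_img = np.full((sample.shape[0] + patch, sample.shape[1] + patch, 3), border_mean, dtype=np.float32)
    padded_img[pad:-pad, pad:-pad] = sample
    padded_mask = np.zeros(padded_img.shape[:2], dtype=bool)
    padded_mask[pad:-pad, pad:-pad] = sign_mask

    patches, targets = [], []
    for sp in chosen:
        ys, xs = np.nonzero(labels == sp)
        cy, cx = int(ys.mean()) + pad, int(xs.mean()) + pad     # super-pixel centroid as patch center
        crop = padded_img[cy - pad:cy + pad, cx - pad:cx + pad]
        overlap = padded_mask[cy - pad:cy + pad, cx - pad:cx + pad].mean()
        patches.append(crop)
        targets.append(1 if overlap > 0.5 else 0)               # step 5.2 labeling rule
    return np.stack(patches), np.array(targets)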
Step 6: Recognize the ROI images obtained in step 4 using the trained CNN model. The concrete step is: for the target, classify the ROI images obtained in step 4 using the trained CaffeNet architecture, where the CaffeNet architecture consists of 5 convolutional layers configured as 9×9×84 conv → 2×2 max-pooling → 3×3×126 conv → 2×2 max-pooling → 4×4×252 conv → 1×1×66 conv → 3×3×126 conv → 2×2 max-pooling; finally, the final classification result is given by the softmax classifier.
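The quoted layer configuration can be rendered as the following PyTorch sketch. The strides, padding, activation functions and the size of the final classifier layer are not specified in the text and are assumed here; the input is a 48×48 RGB ROI as in the experiments, and the softmax of the description is applied implicitly through the cross-entropy loss at training time.

# Hypothetical PyTorch sketch of the described CaffeNet-style architecture
# (9x9x84 conv -> 2x2 maxpool -> 3x3x126 conv -> 2x2 maxpool -> 4x4x252 conv
#  -> 1x1x66 conv -> 3x3x126 conv -> 2x2 maxpool -> softmax classifier).
import torch
import torch.nn as nn

class SpeedLimitCNN(nn.Module):
    def __init__(self, num_classes=2):        # 2 = binary labels of the super-pixel pre-training; adjust for final sign classes
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 84, kernel_size=9), nn.ReLU(inplace=True),    # 48 -> 40
            nn.MaxPool2d(2),                                           # 40 -> 20
            nn.Conv2d(84, 126, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                           # 20 -> 10
            nn.Conv2d(126, 252, kernel_size=4), nn.ReLU(inplace=True), # 10 -> 7
            nn.Conv2d(252, 66, kernel_size=1), nn.ReLU(inplace=True),  # 7 -> 7
            nn.Conv2d(66, 126, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool2d(2),                                           # 7 -> 3
        )
        self.classifier = nn.Linear(126 * 3 * 3, num_classes)

    def forward(self, x):
        x = self.features(x)
        x = torch.flatten(x, 1)
        return self.classifier(x)              # softmax applied via CrossEntropyLoss during training

model = SpeedLimitCNN()
logits = model(torch.randn(1, 3, 48, 48))      # -> shape [1, num_classes]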
Effect analysis:
To verify the effectiveness of the method of the invention, GTSDB was chosen as the detection dataset and GTSRB as the training dataset. The training sample sizes range from 15×15 pixels to 250×250 pixels; the training samples are over-segmented and padded to the same size as the CNN input ROIs, i.e. 48×48 pixels. The training sample set finally obtained with the super-pixel-based CNN pre-training strategy contains more than 1,000,000 training images and more than 500,000 sample images used for cross-validation. So that the left, middle and right regions of the target image occupy reasonable proportions of the prior probability, the speed-limit sign prior probabilities were adjusted to 26.7%, 22.2% and 52.1% respectively. The GC, BL and BSCA algorithms were selected for comparison with the present invention; the test results are shown in Fig. 7;
As can be seen in Fig. 7, for the speed-limit sign detection task the GC algorithm, which uses global contrast, performs very poorly on weak-contrast test images; the BSCA algorithm relies on the prior that targets rarely appear near the image border, so the saliency of the central region is markedly higher and its robustness for this concrete detection task is poor; the BL algorithm and the algorithm of the present invention both retain relatively complete large-scale image structure. At the same time, the saliency model of the present algorithm, based on the prior position and boundary features, highlights the traffic signs on both sides of the road better, and the saliency map after multi-level fusion makes efficient use of the structural information of the image while retaining the many small-scale details inside circular signs, making the target more complete and uniform, which benefits recognition efficiency and precision. In addition, an SVM classifier trained and tested on the same dataset achieves an overall recognition accuracy of 95.73%, whereas the CNN of the present invention achieves 97.85%, which again demonstrates that the present invention has better feature extraction and classification capability for the concrete speed-limit traffic sign task.

Claims (8)

1. A robust speed-limit traffic sign detection and recognition method, comprising step 1: using an over-segmentation method, performing super-pixel segmentation on an original image, and mapping the super-pixel map obtained from the segmentation to an undirected weighted graph; characterized in that the method further comprises the following steps:
Step 2: from the undirected weighted graph obtained in step 1, establishing a multi-feature fusion saliency model using the prior-position constraint feature and the local features of the target, wherein the local features include color features and boundary features;
Step 3: based on the multi-feature fusion saliency model obtained in step 2, establishing the merging rule function for the first-layer vertices of the multilayer saliency maps, and updating and iterating the multi-feature fusion saliency model of each layer in the multilayer saliency maps to obtain the multilayer saliency maps;
Step 4: solving the multilayer saliency maps to obtain the optimal saliency map, and obtaining the ROI images on the optimal saliency map;
Step 5: selecting part of the training samples to train the CNN model and obtaining the trained CNN model;
Step 6: recognizing the ROI images obtained in step 4 using the trained CNN model.
2. The robust speed-limit traffic sign detection and recognition method of claim 1, characterized in that in said step 1 the undirected weighted graph is expressed as G = (V, E), where V is the set of vertices of the graph, each vertex representing a super-pixel region, V = {1, 2, ..., i}, and E is the set of edges of the graph.
3. The robust speed-limit traffic sign detection and recognition method of claim 1, characterized in that the concrete steps of step 2 include:
Step 2.1: let x_i be a pixel of super-pixel region R_i; count the number of pixels x_i inside R_i, denoted N(x_i); assume the prior position is x_c, set the position prior probability p_c and the weight λ (typically 0.1 to 0.2), and compute the saliency contribution degree L_i from these quantities;
Step 2.2: let Ω_i be the neighborhood of super-pixel region R_i; compute the boundary strength B(R_i, R_j) between R_i and each neighboring super-pixel region R_j in Ω_i and sum the boundary strengths; count the number N(R_j) of super-pixel regions R_j inside the neighborhood Ω_i; compute the color means C_i and C_j of R_i and R_j respectively; finally, compute the neighborhood contrast N_i of each super-pixel as the saliency value of the corresponding vertex of the undirected weighted graph;
Step 2.3: based on the prior-position constraint function obtained in step 2.1 and the neighborhood contrast obtained in step 2.2, establish the multi-feature fusion saliency model:
s_i = L_i * N_i.
4. The robust speed-limit traffic sign detection and recognition method of claim 1, characterized in that the concrete steps of step 3 include:
Step 3.1: let S(R_1, R_2) be the saliency correlation between super-pixel regions R_1 and R_2; when R_1 and R_2 are adjacent and their saliency correlation is the minimum within both the neighborhood Ω_1 of R_1 and the neighborhood Ω_2 of R_2, merge them; otherwise, do not merge;
where s_1 and s_2 denote the saliency values of super-pixel regions R_1 and R_2 in the multi-feature fusion saliency model, and s_i and s_j denote the saliency values of the super-pixel neighborhoods of R_1 and R_2 in that model;
Step 3.2: after region merging, repeat step 2 until the largest speed-limit sign presents a large-scale structure in the top-level saliency map, i.e. until the boundary strength of the target differs substantially from that of the background, finally obtaining the multilayer saliency maps S_0, S_1, ..., S_k.
5. The robust speed-limit traffic sign detection and recognition method of claim 1, characterized in that the concrete steps of step 4 include:
Step 4.1: solving the multilayer saliency maps by minimizing a cost function to obtain the optimal saliency map;
Step 4.2: according to the style characteristics of the target, taking a region with an aspect ratio of 1~2:1 around the target as the detection window, removing the smaller and the larger super-pixel regions inside the detection window, and sliding the detection window over the optimal saliency map to obtain the ROI images.
6. The robust speed-limit traffic sign detection and recognition method of claim 5, characterized in that the concrete steps of step 4.1 include:
Step 4.1.1: treating each vertex s_i of the undirected weighted graph as a random event, the random variable set S = {s_i | i ∈ V} being defined as a Markov random field on the set V with respect to the neighborhood system Ω; based on the prior-position constraint feature and the local feature information, establishing the saliency cost function of each super-pixel in the first layer S_0 of the multilayer saliency maps,
where s_i^l denotes the saliency value of region i in the layer-l saliency map S_l, V_l denotes the vertex set of the graph model corresponding to S_l, and θ is the parameter set needed to build the original saliency map, consisting of the three parameters λ, x_p and k_i, λ being the weight, x_p the assumed prior position and k_i the number of super-pixel regions obtained by over-segmentation;
Step 4.1.2: establishing the saliency cost function of the interaction between vertices of adjacent layers in the multilayer saliency maps, i.e. the cost of converting the saliency of vertex i in layer l into the saliency of the merged vertex j in layer l+1, wherein the edge-energy term is computed only over the contrast at the junction of the regions, the cost items involved are the local neighborhood contrasts of region i in layer l and region p in layer l+1 and the difference of their prior-position constraints measured by the Euclidean distance, and the balance factor β balances the influence of the two cost items;
Step 4.1.3: finally minimizing the cost function with a graph-cut-based energy minimization method to obtain the optimal saliency map.
7. The robust speed-limit traffic sign detection and recognition method of claim 1, characterized in that the concrete steps of step 5 include:
Step 5.1: performing super-pixel segmentation on the selected training samples using the over-segmentation method to obtain the super-pixel map of each training sample; randomly selecting 10%~30% of the super-pixels of each training sample as centers, and padding the super-pixel map with the average pixel value of the training sample border to obtain padded images of the same size as the ROI images obtained in step 4;
Step 5.2: determining the label of a padded image according to its area overlap ratio with the training sample: when the overlap ratio is greater than 50%, the label of the padded image is 1; when the overlap ratio is less than 50%, the label is 0;
Step 5.3: feeding the padded images from step 5.2 into the CNN model for training to obtain the trained CNN model.
8. The robust speed-limit traffic sign detection and recognition method of claim 1, characterized in that the concrete step of step 6 is: for the target, classifying the ROI images obtained in step 4 using the trained CaffeNet model, and outputting the final classification result through the classifier.
CN201610810614.5A 2016-09-08 2016-09-08 Robust speed-limit traffic sign detection and recognition method Active CN106529391B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610810614.5A CN106529391B (en) 2016-09-08 2016-09-08 Robust speed-limit traffic sign detection and recognition method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610810614.5A CN106529391B (en) 2016-09-08 2016-09-08 Robust speed-limit traffic sign detection and recognition method

Publications (2)

Publication Number Publication Date
CN106529391A true CN106529391A (en) 2017-03-22
CN106529391B CN106529391B (en) 2019-06-18

Family

ID=58343556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610810614.5A Active CN106529391B (en) 2016-09-08 2016-09-08 Robust speed-limit traffic sign detection and recognition method

Country Status (1)

Country Link
CN (1) CN106529391B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909059A (en) * 2017-11-30 2018-04-13 中南大学 Traffic sign detection and recognition method for complex urban scenes based on collaborative bionic vision
CN108898078A (en) * 2018-06-15 2018-11-27 上海理工大学 Real-time traffic sign detection and recognition method using a multi-scale deconvolution neural network
CN111383473A (en) * 2018-12-29 2020-07-07 安波福电子(苏州)有限公司 Self-adaptive cruise system based on traffic sign speed limit indication
CN116978233A (en) * 2023-09-22 2023-10-31 深圳市城市交通规划设计研究中心股份有限公司 Active variable speed limiting method for accident-prone region

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103226820A (en) * 2013-04-17 2013-07-31 南京理工大学 Improved two-dimensional maximum entropy segmentation night vision image fusion target detection algorithm
CN104462502A (en) * 2014-12-19 2015-03-25 中国科学院深圳先进技术研究院 Image retrieval method based on feature fusion
CN105260737A (en) * 2015-11-25 2016-01-20 武汉大学 Automatic laser scanning data physical plane extraction method with multi-scale characteristics fused
CN105868774A (en) * 2016-03-24 2016-08-17 西安电子科技大学 Selective search and convolutional neural network based vehicle logo recognition method
CN105930868A (en) * 2016-04-20 2016-09-07 北京航空航天大学 Low-resolution airport target detection method based on hierarchical reinforcement learning

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
LANG SUN et al.: "Visual saliency detection based on multi-scale and multi-channel mean", Multimedia Tools and Applications *
RONGQIANG QIAN et al.: "Robust Chinese Traffic Sign Detection and Recognition with Deep Convolutional Neural Network", 2015 11th International Conference on Natural Computation (ICNC) *
YUJUN ZENG et al.: "Traffic Sign Recognition Using Deep Convolutional Networks and Extreme Learning Machine", International Conference on Intelligent Science and Big Data Engineering *
刘占文 et al.: "基于视觉注意机制的弱对比度下…", China Journal of Highway and Transport (中国公路学报) *
胡正平 et al.: "卷积神经网络分类模型在模式识别中的新进展" (Recent progress of convolutional neural network classification models in pattern recognition), Journal of Yanshan University (燕山大学学报) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107909059A (en) * 2017-11-30 2018-04-13 中南大学 Traffic sign detection and recognition method for complex urban scenes based on collaborative bionic vision
CN108898078A (en) * 2018-06-15 2018-11-27 上海理工大学 Real-time traffic sign detection and recognition method using a multi-scale deconvolution neural network
CN111383473A (en) * 2018-12-29 2020-07-07 安波福电子(苏州)有限公司 Self-adaptive cruise system based on traffic sign speed limit indication
CN111383473B (en) * 2018-12-29 2022-02-08 安波福电子(苏州)有限公司 Self-adaptive cruise system based on traffic sign speed limit indication
CN116978233A (en) * 2023-09-22 2023-10-31 深圳市城市交通规划设计研究中心股份有限公司 Active variable speed limiting method for accident-prone region
CN116978233B (en) * 2023-09-22 2023-12-26 深圳市城市交通规划设计研究中心股份有限公司 Active variable speed limiting method for accident-prone region

Also Published As

Publication number Publication date
CN106529391B (en) 2019-06-18

Similar Documents

Publication Publication Date Title
CN110796168B (en) Vehicle detection method based on improved YOLOv3
CN109919072B (en) Fine vehicle type recognition and flow statistics method based on deep learning and trajectory tracking
CN105160309B (en) Three lanes detection method based on morphological image segmentation and region growing
CN108985194B (en) Intelligent vehicle travelable area identification method based on image semantic segmentation
CN109932730B (en) Laser radar target detection method based on multi-scale monopole three-dimensional detection network
CN109325418A (en) Based on pedestrian recognition method under the road traffic environment for improving YOLOv3
CN102708356B (en) Automatic license plate positioning and recognition method based on complex background
CN109447033A (en) Vehicle front obstacle detection method based on YOLO
CN111553201B (en) Traffic light detection method based on YOLOv3 optimization algorithm
CN107885764A (en) Based on the quick Hash vehicle retrieval method of multitask deep learning
CN110020651A (en) Car plate detection localization method based on deep learning network
CN107729801A (en) A kind of vehicle color identifying system based on multitask depth convolutional neural networks
CN106096607A (en) A kind of licence plate recognition method
CN105809121A (en) Multi-characteristic synergic traffic sign detection and identification method
CN113486764B (en) Pothole detection method based on improved YOLOv3
CN104573685A (en) Natural scene text detecting method based on extraction of linear structures
CN108280397A (en) Human body image hair detection method based on depth convolutional neural networks
CN113420607A (en) Multi-scale target detection and identification method for unmanned aerial vehicle
Fan et al. Real-time object detection for lidar based on ls-r-yolov4 neural network
CN106529391A (en) Robust speed-limit traffic sign detection and recognition method
CN110378239A (en) A kind of real-time traffic marker detection method based on deep learning
CN103413145A (en) Articulation point positioning method based on depth image
CN105005989A (en) Vehicle target segmentation method under weak contrast
CN104537353A (en) Three-dimensional face age classifying device and method based on three-dimensional point cloud
CN106845458A (en) A kind of rapid transit label detection method of the learning machine that transfinited based on core

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant