CN104517103A - Traffic sign classification method based on deep neural network - Google Patents
- Publication number: CN104517103A
- Application number: CN201410841539.XA
- Authority: CN (China)
- Prior art keywords: traffic sign, neural network, layer, deep neural, picture
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
- G06F18/2111—Selection of the most significant subset of features by using evolutionary computational techniques, e.g. genetic algorithms
Abstract
The invention discloses a traffic sign classification method based on a deep neural network. The method comprises the following steps: A, detecting a read-in video with a moving-target detection method based on optical flow and, when a moving object is detected, extracting a region of interest; B, partitioning the extracted region of interest with blocks of a fixed size; C, scaling the partitioned pictures and converting them into pictures of the same size; D, taking the converted pictures as input and classifying them with a convolutional neural network. According to the method, the region of interest is extracted from the pictures after motion detection and then partitioned into blocks; after the resulting pictures have been converted to the same size, they are processed with a convolutional neural network. This avoids the problems caused by an artificially assumed class-conditional density function, greatly speeds up testing, and greatly improves precision. The traffic sign classification method based on a deep neural network disclosed by the invention can be widely applied in the traffic field.
Description
Technical field
The present invention relates to the field of traffic, and in particular to a traffic sign classification method based on a deep neural network.
Background technology
With the progress of urbanization and the popularization of the automobile, the number of vehicles has risen sharply, traffic congestion has worsened, and traffic accidents occur frequently, so the safety and efficiency of road traffic have become increasingly prominent problems. Driver assistance systems based on computer vision are one of the important measures for solving these safety and efficiency problems and are gradually being applied in intelligent transportation systems. Research on them falls roughly into three areas: road recognition, collision recognition, and traffic sign recognition. Road recognition and collision recognition were studied earlier and have produced many good results, but traffic sign recognition has received less research attention. Traffic signs carry much important traffic information, such as changes in the road ahead, speed limits, and restrictions on driving behaviour. Providing this information to the driver in good time helps the driver react promptly, ensures driving safety, and avoids traffic accidents, and is therefore of great significance.
From the national standard for traffic signs, the following prior knowledge can be obtained. Traffic signs can be classified by colour: they are generally divided into classes such as warning signs, prohibition signs, indication signs, and guide signs, and each class has a different colour. The shape and size of a traffic sign and the characters, digits, and geometric patterns it contains are all specified in the standard. Traffic signs are usually placed on the right of the road, at a distance of 2 to 4.5 m from the roadside. In short, using this knowledge the search space can be reduced and the processing speed of traffic sign recognition greatly accelerated.
The difficulty of traffic sign recognition: traffic sign recognition captures images of traffic signs in outdoor natural scenes with a camera mounted on the vehicle and feeds them into a computer for processing. This is more challenging than recognizing targets in ordinary non-natural scenes, because many factors in natural scenes affect recognition quality and execution efficiency: (1) illumination conditions in outdoor natural scenes are variable and uncontrollable; (2) the motion and vibration of the vehicle blur the image of the traffic sign; (3) traffic sign boards are placed outdoors and suffer damage from weather conditions, graffiti, and dust; (4) although traffic signs are manufactured according to international conventions, each country enforces its own national standard, so an international standard cannot serve as the sample library for classification; (5) traffic sign recognition must be applicable in a real-time environment.
In recent years, a large number of Chinese research institutions and universities have taken part in research on traffic sign recognition and have achieved certain results. For example:

1, a traffic sign recognition method and system based on SURF (CN103544484A), proposed by Yang Haidong et al. of Guangdong University of Technology, which improves the efficiency of traffic sign recognition;

2, an outdoor traffic sign recognition method for low-illumination scenes (CN102881160A), proposed by Cai Nian, Liang Wenzhao, et al. of Guangdong University of Technology, which is an outdoor traffic sign recognition method with relatively strong robustness and relatively high accuracy;

3, a traffic sign recognition method (CN102799859A), proposed by Yuan Xue, Zhang Hui, et al. of Beijing Jiaotong University, which not only retains the invariance of SIFT features to changes in image scale and to rotation, but also makes the extracted features easier to use for distinguishing colour and spatial-position features, and is very effective for traffic signs with rich colours and different spatial-position distributions;

4, a method of traffic sign recognition based on sparse representation and dictionary learning (CN102024152A), proposed by Wang Donghui, Deng Xiao, et al. of Zhejiang University, which uses sparse representation and probabilistic methods to classify traffic sign pictures and achieves a relatively high recognition rate;

5, a hierarchical multi-feature traffic sign recognition method (CN103390167A), proposed by Sun Rui, Wang Jizhen, et al. of SAIC Chery Automobile Co., Ltd., which addresses the low accuracy and poor real-time performance of colour-based detection methods in traffic sign recognition.
In summary, in the prior art traffic sign recognition generally comprises two modules, detection and classification. The detection stage generally uses the colour or shape features of traffic signs to detect regions that may contain a sign and then normalizes the size of the regions of interest; the classification stage further judges the validity of a candidate traffic sign region and identifies the meaning of the sign.
Detection methods can be divided into two classes, colour-based and shape-based. Colour-based detection: colour information is invariant to size and viewing angle and has strong separability, so it is very important for traffic sign detection, and almost all traffic sign recognition systems make use of it. Colour-based detection is the most basic detection method; it detects regions of interest by segmenting the typical colours of traffic signs in the captured image. These methods can be further divided into three classes:

(1) Colour thresholding methods: in this kind of algorithm the choice of colour space is very important. The most intuitive choice is RGB space, segmented directly with set thresholds.

(2) Methods based on neural network learning: to overcome the non-linearity of colour-space transformation and the influence of noise, methods based on neural network learning can be adopted. Because these methods train offline and detect online, their real-time performance is good, they have a certain generalization ability, and they can reduce the influence of noise. Their drawback is that the choice of network structure, number of hidden nodes, and number of layers all depend on how representative the training set is, and building a database that covers every situation is no easy matter.

(3) Methods based on visual models: to overcome varying visual conditions, traffic signs can be detected with a visual model, an approach that has also been applied in many projects. Model-based methods of this kind take human visual characteristics and environmental conditions into account and have a certain effect, but their parameters must be determined according to the environmental conditions at application time, which is relatively complicated, and they give little consideration to occluded or defaced signs.

Shape-based methods: although colour-based detection has the advantage of focusing directly on the sign, illumination and weather changes mean that colour information alone cannot accurately detect the traffic sign region. Shape-based methods using image gradients, which grew out of research on scene analysis in robotics, three-dimensional object recognition, and part localization against CAD databases, are unaffected by illumination but have received little attention in traffic sign detection research. Combining colour-based and shape-based methods is the most suitable approach for traffic sign detection, and at present most shape-based detection methods are built on top of colour-based ones. In traffic sign recognition, shape-based methods can be further divided into edge-contour methods and template-matching methods. The most basic are the edge-contour methods; several mature edge extraction methods are available, and the extracted edges are then analysed. The shortcoming of the above methods is that it is difficult to achieve both high classification precision and high detection speed.
Summary of the invention
To solve the above technical problems, the object of the invention is to provide a traffic sign classification method based on a deep neural network that is both highly precise and fast to detect.
The technical solution adopted by the present invention is a traffic sign classification method based on a deep neural network, comprising the following steps:

A, detecting the read-in video with a moving-target detection method based on optical flow and, when a moving object is detected, extracting a region of interest;

B, partitioning the extracted region of interest with a block of fixed size;

C, scaling the partitioned pictures and converting them into pictures of the same size;

D, taking the converted pictures as input and classifying them with a convolutional neural network.
Further, step B specifically comprises:

B1, partitioning the extracted region of interest with a block of fixed size to obtain a block picture;

B2, shifting the block of fixed size by one pixel and partitioning the extracted region of interest again to obtain another block picture;

B3, repeating step B2 to obtain multiple block pictures.
Further, in step B the size of the fixed block is N × N, where N takes a value from 50 to 70.
Further, the size of the converted pictures in step C is 32 × 32.
Further, the convolutional neural network in step D comprises 7 layers, in order: a first convolutional layer, a first down-sampling layer, a second convolutional layer, a second down-sampling layer, a third convolutional layer, a feature vector layer, and an output layer.
Further, the first convolutional layer comprises 6 feature maps of size 28 × 28, the first down-sampling layer comprises 6 feature maps of size 14 × 14, the second convolutional layer comprises 16 feature maps of size 10 × 10, the second down-sampling layer comprises 16 feature maps of size 5 × 5, and the third convolutional layer comprises 300 neurons.
Further, the output layer comprises 43 labels, and the 300 neurons of the third convolutional layer are fully connected to each label of the output layer.
The beneficial effects of the invention are as follows: the method extracts a region of interest from the image after motion detection, partitions it into blocks, converts the resulting pictures to the same size, and processes them with a convolutional neural network. This avoids the problems caused by an artificially assumed class-conditional density function, greatly accelerates testing, and improves precision.
Description of the drawings

Fig. 1 is a flow chart of the steps of the method of the invention;

Fig. 2 is a schematic diagram of the layers of the neural network in the method of the invention;

Fig. 3 is a schematic diagram of the convolution process in the method of the invention.
Embodiment
The specific embodiments of the present invention are further described below with reference to the accompanying drawings:
With reference to Fig. 1, a traffic sign classification method based on a deep neural network comprises the following steps:

A, detecting the read-in video with a moving-target detection method based on optical flow and, when a moving object is detected, extracting a region of interest;

B, partitioning the extracted region of interest with a block of fixed size;

C, scaling the partitioned pictures and converting them into pictures of the same size;

D, taking the converted pictures as input and classifying them with a convolutional neural network.
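The detection in step A can be sketched as follows. The patent specifies an optical-flow moving-target detector; as a dependency-free stand-in, this sketch locates the moving region with simple frame differencing (the function name, threshold, and synthetic frames are illustrative assumptions, not from the patent):

```python
import numpy as np

def detect_motion_roi(prev_frame, frame, threshold=25):
    """Simplified stand-in for step A: find the bounding box of moving pixels.

    The patent uses an optical-flow moving-target detector; plain frame
    differencing is used here so the sketch stays dependency-free. Returns
    (top, bottom, left, right) of the region of interest, or None if no
    motion is detected.
    """
    diff = np.abs(frame.astype(np.int32) - prev_frame.astype(np.int32))
    mask = diff > threshold
    if not mask.any():
        return None
    rows = np.where(mask.any(axis=1))[0]
    cols = np.where(mask.any(axis=0))[0]
    return rows[0], rows[-1] + 1, cols[0], cols[-1] + 1

# Two synthetic grayscale frames: a bright square "appears" in the second one.
prev = np.zeros((120, 160), dtype=np.uint8)
curr = prev.copy()
curr[40:90, 60:110] = 200           # the moving object
roi = detect_motion_roi(prev, curr)
print(roi)                          # bounding box of the moving region
```

In a real system the differencing would be replaced by a dense optical-flow estimate (for example OpenCV's Farnebäck method) over consecutive video frames.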
A convolutional neural network (CNN) is a kind of artificial neural network and has become a research hotspot in speech analysis and image recognition. Its weight-sharing network structure makes it more similar to a biological neural network, reduces the complexity of the network model, and reduces the number of weights. Its network structure is shown in Fig. 2.
As a further preferred embodiment, step B specifically comprises:

B1, partitioning the extracted region of interest with a block of fixed size to obtain a block picture;

B2, shifting the block of fixed size by one pixel and partitioning the extracted region of interest again to obtain another block picture;

B3, repeating step B2 to obtain multiple block pictures.
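Steps B1 to B3 amount to a stride-1 sliding window over the region of interest. A minimal sketch, with the function name and test data assumed for illustration:

```python
import numpy as np

def block_partition(roi, n):
    """Steps B1-B3: slide a fixed n-by-n block over the region of interest,
    shifting by one pixel each repetition, and collect every block picture."""
    h, w = roi.shape[:2]
    blocks = []
    for top in range(h - n + 1):
        for left in range(w - n + 1):
            blocks.append(roi[top:top + n, left:left + n])
    return blocks

roi = np.arange(70 * 80, dtype=np.uint8).reshape(70, 80)
blocks = block_partition(roi, 60)   # N = 60, inside the patent's 50-70 range
print(len(blocks))                  # (70-60+1) * (80-60+1) = 231
```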
As a further preferred embodiment, in step B the size of the fixed block is N × N, where N takes a value from 50 to 70.
As a further preferred embodiment, the size of the converted pictures in step C is 32 × 32.
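The scaling in step C can be sketched with a nearest-neighbour resize. The patent does not name an interpolation method, so nearest-neighbour is an assumption; in practice a library routine such as `cv2.resize` would normally do this job:

```python
import numpy as np

def resize_nearest(img, out_h=32, out_w=32):
    """Step C: scale a block picture to the common 32 x 32 size using
    nearest-neighbour sampling (interpolation method assumed)."""
    h, w = img.shape[:2]
    row_idx = np.arange(out_h) * h // out_h   # source row for each output row
    col_idx = np.arange(out_w) * w // out_w   # source column for each output column
    return img[row_idx][:, col_idx]

patch = np.random.randint(0, 256, size=(60, 60), dtype=np.uint8)
small = resize_nearest(patch)
print(small.shape)                  # (32, 32)
```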
With reference to Fig. 2, as a further preferred embodiment, the convolutional neural network in step D comprises 7 layers, in order: a first convolutional layer C1, a first down-sampling layer S2, a second convolutional layer C3, a second down-sampling layer S4, a third convolutional layer C5, a feature vector layer F6 (not marked in Fig. 2), and an output layer.
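The feature-map sizes of the seven layers just listed can be checked with a minimal forward pass using random weights. This is a shape walk-through only, not a trained classifier; connecting every C3 map to all six S2 maps, and the choice of activation functions outside the down-sampling layers, are simplifying assumptions:

```python
import numpy as np

def conv_valid(img, kernel):
    """2-D 'valid' convolution (cross-correlation, in fact; the sign
    convention does not matter for a shape walk-through)."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def subsample(fmap, coeff=0.25, bias=0.0):
    """Down-sampling as described: sum each non-overlapping 2x2 neighbourhood,
    weight it, add a bias, and squash with a sigmoid."""
    h, w = fmap.shape
    pooled = fmap.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))
    return 1.0 / (1.0 + np.exp(-(coeff * pooled + bias)))

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 32))                        # 32 x 32 input picture

c1 = [conv_valid(x, rng.standard_normal((5, 5))) for _ in range(6)]    # 6 @ 28x28
s2 = [subsample(f) for f in c1]                                         # 6 @ 14x14
# each C3 map combines S2 maps; here every map uses all six (assumption)
c3 = [sum(conv_valid(f, rng.standard_normal((5, 5))) for f in s2)
      for _ in range(16)]                                               # 16 @ 10x10
s4 = [subsample(f) for f in c3]                                         # 16 @ 5x5
feat = np.concatenate([f.ravel() for f in s4])                          # 16*5*5 = 400
c5 = np.tanh(rng.standard_normal((300, feat.size)) @ feat)              # 300 neurons
out = rng.standard_normal((43, 300)) @ c5                               # 43 labels

print(c1[0].shape, s2[0].shape, c3[0].shape, s4[0].shape, c5.shape, out.shape)
```

The printed shapes match the sizes claimed for C1, S2, C3, S4, C5, and the output layer.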
With reference to Fig. 3, the convolution process is as follows: a trainable filter f_x is convolved with the input (in the first stage this is the input image; in later stages it is a convolution feature map), and a bias b_x is added to obtain the convolutional layer C_x. The sub-sampling process is as follows: every neighbourhood of four pixels is summed into one pixel, weighted by a scalar W_{x+1}, increased by a bias b_{x+1}, and passed through a sigmoid activation function, producing a feature map S_{x+1} reduced roughly by a factor of four. The convolution operation can thus be regarded as a mapping from one plane to the next, while the down-sampling layer can be regarded as a blurring filter that performs further feature extraction. From hidden layer to hidden layer the spatial resolution decreases while the number of planes per layer increases, which allows more feature information to be detected.
A convolution kernel of fixed size is used to perceive each neuron (i.e. each pixel) of the input image; after convolution, feature maps are produced at layer C1. The pixels of each group of four in those feature maps are then summed, weighted, and biased, and passed through a sigmoid function to obtain the feature maps of layer S2. These feature maps are convolved again to obtain layer C3, and the same hierarchical structure as for S2 then produces S4. Each feature map of layer S4 is connected to each neuron in convolutional layer C5, which helps to prevent over-fitting. Finally, these pixel values are rasterized at the feature vector layer F6 and concatenated into a vector that is input to a traditional neural network, which produces the output.
In general, the C layers are feature extraction layers, i.e. convolutional layers: a convolution kernel composed of weights perceives each feature map of the preceding layer, extracting image features and generating the feature maps of that convolutional layer. The S layers are down-sampling layers. Each computational layer of the network is composed of multiple feature maps; each feature map is a plane, and the weights of all neurons within a plane are equal. The feature-mapping structure uses the sigmoid function, whose influence-function kernel is small, as the activation function of the convolutional network, giving the feature maps shift invariance. In particular, the convolution kernel used within each layer is identical, which achieves weight sharing and greatly reduces the complexity of the whole network.
In the present invention, the convolutional neural network has 7 layers (not counting the input layer); every layer contains trainable parameters (i.e. connection weights), and every layer has multiple feature maps. Each feature map extracts one kind of feature from the input via a convolution kernel, and each feature map contains multiple neurons. In the present invention, the input picture is set to a size of 32 × 32.
Layer C1 is a convolutional layer composed of 6 feature maps. Each neuron in a feature map is connected to a 5 × 5 neighbourhood of the input. The size of the feature maps is 28 × 28; layer C1 has (28 × 28 + 1) × 6 = 4710 trainable parameters (weights and biases) and 5 × 5 × 6 × 32 × 32 = 153600 connections with the input layer.
Layer S2 is a down-sampling layer with 6 feature maps of size 14 × 14. Each unit in a feature map is connected to a 2 × 2 neighbourhood of the corresponding feature map in layer C1. The 4 inputs of each S2 unit are added, multiplied by a trainable coefficient, and increased by a trainable bias, and the result is computed through the sigmoid function; the trainable coefficient and bias control the degree of non-linearity of the sigmoid. The 2 × 2 receptive fields of the units do not overlap, so each feature map in S2 is 1/4 the size of the corresponding feature map in C1 (1/2 in each of the rows and columns). Layer S2 has (14 × 14 + 1) × 6 = 1182 trainable parameters and 6 × 28 × 28 × 5 × 5 = 117600 connections with layer C1.
Layer C3 is also a convolutional layer. It likewise convolves layer S2 with 5 × 5 convolution kernels, so each resulting feature map has only 10 × 10 neurons. Each feature map corresponds to one convolution kernel, so the layer has 16 different convolution kernels. One point should be noted here: each feature map in C3 is connected to all 6, or to several, of the feature maps in S2, meaning that each feature map of this layer is a different combination of the feature maps extracted by the previous layer.
Layer S4 is a down-sampling layer composed of 16 feature maps of size 5 × 5. Each unit in a feature map is connected to a 2 × 2 neighbourhood of the corresponding feature map in C3, in the same way as between C1 and S2. Layer S4 has 16 × 5 × 5 + 16 = 416 trainable parameters and 10 × 10 × 5 × 5 × 16 = 40000 connections with layer C3.
Finally, layer S4 is fully connected to convolutional layer C5, which is composed of individual neurons; in this embodiment the layer has 300 neurons, and each feature map in S4 is fully connected to each neuron of this convolutional layer. The 300 neurons of convolutional layer C5 are then fully connected to each label of the output layer; the purpose of adding this convolutional layer is to prevent over-fitting. The output layer finally produces the output H_{w,b}(X).
As a further preferred embodiment, the first convolutional layer comprises 6 feature maps of size 28 × 28, the first down-sampling layer comprises 6 feature maps of size 14 × 14, the second convolutional layer comprises 16 feature maps of size 10 × 10, the second down-sampling layer comprises 16 feature maps of size 5 × 5, and the third convolutional layer comprises 300 neurons.
As a further preferred embodiment, the output layer comprises 43 labels, and the 300 neurons of the third convolutional layer are fully connected to each label of the output layer.
The convolutional neural network of the present invention mainly comprises two parts: a training process and a test process.
The main way neural networks are used for pattern recognition is supervised learning; unsupervised learning is used more for cluster analysis. In supervised pattern recognition, since the class of every sample is known, the samples are no longer partitioned in space according to their natural distribution tendency. Instead, a suitable space-partitioning method is sought according to the separation between the distributions of same-class and different-class samples, or a classification boundary is found so that samples of different classes lie in different regions. This requires a long and complex learning process that continually adjusts the position of the classification boundary partitioning the sample space, so that as few samples as possible are assigned to regions of other classes.
A convolutional network is in essence a mapping from input to output. It can learn a large number of mapping relations between inputs and outputs without any precise mathematical expression relating them: as long as the convolutional network is trained on known patterns, it acquires the mapping ability between input-output pairs. A convolutional network performs supervised training, so its sample set consists of vector pairs of the form (input vector, ideal output vector). All these vector pairs should come from the actual running results of the system the network is to simulate; they can be collected from the actually running system. Before training starts, all weights should be initialized with different small random numbers, for example random numbers distributed in [0, 1]. "Small random numbers" ensure that the network does not enter a saturated state because the weights are too large, which would cause training to fail; "different" ensures that the network can learn normally. In fact, if the weight matrices are initialized with identical numbers, symmetry results: the convolution kernels of every layer are all identical, and the network is unable to learn.
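A tiny demonstration of the initialization point: identical initial kernels leave a layer symmetric, while small different random numbers break the symmetry. The kernel count, the scale factor 0.1, and the constant 0.05 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# "Different small random numbers": uniform in [0, 1) scaled down, as the
# description suggests, so the sigmoid does not start out saturated.
k_good = [rng.uniform(0.0, 1.0, size=(5, 5)) * 0.1 for _ in range(6)]

# Initialising every kernel with the same numbers gives a symmetric layer:
# the kernels stay identical under training and the network cannot learn.
same = np.full((5, 5), 0.05)
k_bad = [same.copy() for _ in range(6)]

distinct_good = len({k.tobytes() for k in k_good})
distinct_bad = len({k.tobytes() for k in k_bad})
print(distinct_good, distinct_bad)   # 6 distinct kernels vs. 1
```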
The training algorithm is similar to the traditional BP algorithm. It mainly comprises 4 steps, divided into two stages:

First stage, the forward stage:

a) take a sample (X, Y_p) from the sample set and input X into the network;

b) compute the corresponding actual output O_p.

In this stage, information is transferred from the input layer to the output layer through successive transformations. This is also the process the network performs during normal operation after training is complete. In this process, the network computes (in effect, the input is multiplied by the convolution kernel of each layer to obtain the final output):

O_p = F_n(...(F_2(F_1(X_p W(1)) W(2)))...W(n))

Second stage, the back-propagation stage:

a) compute the cost function J(W, b) = 1/2 × ||O_p − Y_p||²;

b) adjust the weight matrices by back-propagation so as to minimize the error.
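The two training stages can be sketched on a toy single-layer network: the forward stage computes O_p, the cost J(W, b) = 1/2 ||O_p − Y_p||² is evaluated, and the weights are adjusted against its gradient. The single layer standing in for W(1)..W(n), the learning rate, and the synthetic targets are assumptions made only to keep the sketch small:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(2)
W = rng.uniform(-0.1, 0.1, size=(4, 8))   # one weight matrix stands in for W(1)..W(n)
b = np.zeros(4)
X = rng.standard_normal((16, 8))          # 16 samples (X, Y_p)
Y = sigmoid(X @ np.ones(8) * 0.3)[:, None] * np.ones((16, 4))  # synthetic targets

losses = []
for _ in range(200):
    O = sigmoid(X @ W.T + b)                   # forward stage: compute O_p
    J = 0.5 * np.sum((O - Y) ** 2) / len(X)    # cost J(W,b) = 1/2 ||O_p - Y_p||^2
    losses.append(J)
    # back-propagation stage: gradient of J through the sigmoid
    delta = (O - Y) * O * (1.0 - O)
    W -= 0.5 * (delta.T @ X) / len(X)
    b -= 0.5 * delta.mean(axis=0)

print(losses[0], losses[-1])              # the cost decreases over training
```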
In the present invention, the first part of the training process is collecting samples. The present invention collects 300,000 samples: 50,000 speed-limit sign pictures, 50,000 other prohibition sign pictures, 50,000 de-restriction sign pictures, 50,000 warning sign pictures, 50,000 other indication sign pictures, and 50,000 danger sign pictures. These 300,000 pictures are then classified by the convolutional neural network to obtain label results, comprising the speed-limit sign class, other prohibition sign classes, the de-restriction sign class, the warning sign class, the danger sign class, and other indication sign classes, 43 labels in total.
The test process is used to test whether the precision and speed with which the neural network classifies traffic signs are reliable. It comprises: reading in a video image, performing moving-object detection, partitioning the image into blocks, classifying with the classifier, and obtaining the test result.
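The precision check in the test process can be sketched as a simple accuracy loop over held-out pictures. The toy brightness classifier and the synthetic data are purely hypothetical stand-ins for the trained network:

```python
import numpy as np

def evaluate(classifier, pictures, labels):
    """Test process: run the classifier over held-out pictures and report
    classification accuracy (a stand-in for the patent's precision check)."""
    predictions = [classifier(p) for p in pictures]
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    return correct / len(labels)

# Hypothetical toy classifier: predict the label from mean brightness alone.
def toy_classifier(picture):
    return int(picture.mean() > 128)

rng = np.random.default_rng(3)
dark = [rng.integers(0, 100, size=(32, 32)) for _ in range(10)]
bright = [rng.integers(160, 255, size=(32, 32)) for _ in range(10)]
acc = evaluate(toy_classifier, dark + bright, [0] * 10 + [1] * 10)
print(acc)                                # 1.0 on this separable toy set
```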
The above describes preferred implementations of the present invention, but the invention is not limited to the described embodiments. Those of ordinary skill in the art can also make various equivalent variations or replacements without departing from the spirit of the present invention, and these equivalent variations or replacements are all included within the scope defined by the claims of this application.
Claims (7)
1. A traffic sign classification method based on a deep neural network, characterized by comprising the following steps:

A, detecting a read-in video with a moving-target detection method based on optical flow and, when a moving object is detected, extracting a region of interest;

B, partitioning the extracted region of interest with a block of fixed size;

C, scaling the partitioned pictures and converting them into pictures of the same size;

D, taking the converted pictures as input and classifying them with a convolutional neural network.

2. The traffic sign classification method based on a deep neural network according to claim 1, characterized in that step B specifically comprises:

B1, partitioning the extracted region of interest with a block of fixed size to obtain a block picture;

B2, shifting the block of fixed size by one pixel and partitioning the extracted region of interest again to obtain another block picture;

B3, repeating step B2 to obtain multiple block pictures.

3. The traffic sign classification method based on a deep neural network according to claim 1 or 2, characterized in that in step B the size of the fixed block is N × N, where N takes a value from 50 to 70.

4. The traffic sign classification method based on a deep neural network according to claim 1, characterized in that the size of the converted pictures in step C is 32 × 32.

5. The traffic sign classification method based on a deep neural network according to claim 1, characterized in that the convolutional neural network in step D comprises 7 layers, in order: a first convolutional layer, a first down-sampling layer, a second convolutional layer, a second down-sampling layer, a third convolutional layer, a feature vector layer, and an output layer.

6. The traffic sign classification method based on a deep neural network according to claim 5, characterized in that the first convolutional layer comprises 6 feature maps of size 28 × 28, the first down-sampling layer comprises 6 feature maps of size 14 × 14, the second convolutional layer comprises 16 feature maps of size 10 × 10, the second down-sampling layer comprises 16 feature maps of size 5 × 5, and the third convolutional layer comprises 300 neurons.

7. The traffic sign classification method based on a deep neural network according to claim 6, characterized in that the output layer comprises 43 labels, and the 300 neurons of the third convolutional layer are fully connected to each label of the output layer.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201410841539.XA CN104517103A (en) | 2014-12-26 | 2014-12-26 | Traffic sign classification method based on deep neural network |
Publications (1)
Publication Number | Publication Date |
---|---|
CN104517103A true CN104517103A (en) | 2015-04-15 |
Family
ID=52792377
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201410841539.XA Pending CN104517103A (en) | 2014-12-26 | 2014-12-26 | Traffic sign classification method based on deep neural network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN104517103A (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2026313A1 (en) * | 2007-08-17 | 2009-02-18 | MAGNETI MARELLI SISTEMI ELETTRONICI S.p.A. | A method and a system for the recognition of traffic signs with supplementary panels |
CN102024152A (en) * | 2010-12-14 | 2011-04-20 | Zhejiang University | Method for recognizing traffic signs based on sparse representation and dictionary learning |
CN102881160A (en) * | 2012-07-18 | 2013-01-16 | Guangdong University of Technology | Outdoor traffic sign identification method under low-illumination scene |
CN103544484A (en) * | 2013-10-30 | 2014-01-29 | Guangdong University of Technology | Traffic sign identification method and system based on SURF |
CN104244113A (en) * | 2014-10-08 | 2014-12-24 | Institute of Automation, Chinese Academy of Sciences | Method for generating video abstract on basis of deep learning technology |
Non-Patent Citations (1)
Title |
---|
Yang Fei: "Design of a Traffic Sign Recognition Method", Microcomputer Information * |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104850845A (en) * | 2015-05-30 | 2015-08-19 | 大连理工大学 | Traffic sign recognition method based on asymmetric convolution neural network |
CN104850845B (en) * | 2015-05-30 | 2017-12-26 | 大连理工大学 | A kind of traffic sign recognition method based on asymmetric convolutional neural networks |
CN105488534A (en) * | 2015-12-04 | 2016-04-13 | 中国科学院深圳先进技术研究院 | Method, device and system for deeply analyzing traffic scene |
CN105550701A (en) * | 2015-12-09 | 2016-05-04 | 福州华鹰重工机械有限公司 | Real-time image extraction and recognition method and device |
CN105550701B (en) * | 2015-12-09 | 2018-11-06 | 福州华鹰重工机械有限公司 | Realtime graphic extracts recognition methods and device |
CN105551036A (en) * | 2015-12-10 | 2016-05-04 | 中国科学院深圳先进技术研究院 | Training method and device for deep learning network |
CN108475331B (en) * | 2016-02-17 | 2022-04-05 | 英特尔公司 | Method, apparatus, system and computer readable medium for object detection |
CN108475331A (en) * | 2016-02-17 | 2018-08-31 | 英特尔公司 | Use the candidate region for the image-region for including interested object of multiple layers of the characteristic spectrum from convolutional neural networks model |
US11244191B2 (en) | 2016-02-17 | 2022-02-08 | Intel Corporation | Region proposal for image regions that include objects of interest using feature maps from multiple layers of a convolutional neural network model |
CN105809138A (en) * | 2016-03-15 | 2016-07-27 | 武汉大学 | Road warning mark detection and recognition method based on block recognition |
CN105956608A (en) * | 2016-04-21 | 2016-09-21 | 恩泊泰(天津)科技有限公司 | Objective positioning and classifying algorithm based on deep learning |
CN105930830A (en) * | 2016-05-18 | 2016-09-07 | 大连理工大学 | Road surface traffic sign recognition method based on convolution neural network |
CN105930830B (en) * | 2016-05-18 | 2019-07-16 | 大连理工大学 | A kind of pavement marking recognition methods based on convolutional neural networks |
CN106372571A (en) * | 2016-08-18 | 2017-02-01 | 宁波傲视智绘光电科技有限公司 | Road traffic sign detection and identification method |
CN107784315A (en) * | 2016-08-26 | 2018-03-09 | 深圳光启合众科技有限公司 | The recognition methods of destination object and device, and robot |
CN106844524A (en) * | 2016-12-29 | 2017-06-13 | 北京工业大学 | A kind of medical image search method converted based on deep learning and Radon |
CN106682696B (en) * | 2016-12-29 | 2019-10-08 | 华中科技大学 | The more example detection networks and its training method refined based on online example classification device |
CN106844524B (en) * | 2016-12-29 | 2019-08-09 | 北京工业大学 | A kind of medical image search method converted based on deep learning and Radon |
CN106682696A (en) * | 2016-12-29 | 2017-05-17 | 华中科技大学 | Multi-example detection network based on refining of online example classifier and training method thereof |
US10262218B2 (en) | 2017-01-03 | 2019-04-16 | Qualcomm Incorporated | Simultaneous object detection and rigid transform estimation using neural network |
CN107220643A (en) * | 2017-04-12 | 2017-09-29 | Traffic sign recognition system based on a deep learning neural network model |
CN107016521A (en) * | 2017-04-26 | 2017-08-04 | Warehouse nameplate recognition method based on image convolutional neural network technique |
CN107085733A (en) * | 2017-05-15 | 2017-08-22 | Offshore infrared ship recognition method based on CNN deep learning |
CN109146074A (en) * | 2017-06-28 | 2019-01-04 | 埃森哲环球解决方案有限公司 | Image object identification |
CN107437110B (en) * | 2017-07-11 | 2021-04-02 | 中国科学院自动化研究所 | Block convolution optimization method and device of convolutional neural network |
CN107437110A (en) * | 2017-07-11 | 2017-12-05 | 中国科学院自动化研究所 | The piecemeal convolution optimization method and device of convolutional neural networks |
CN110019896A (en) * | 2017-07-28 | 2019-07-16 | 杭州海康威视数字技术股份有限公司 | A kind of image search method, device and electronic equipment |
US11586664B2 (en) | 2017-07-28 | 2023-02-21 | Hangzhou Hikvision Digital Technology Co., Ltd. | Image retrieval method and apparatus, and electronic device |
CN109492454A (en) * | 2017-09-11 | 2019-03-19 | 比亚迪股份有限公司 | Object identifying method and device |
CN107742121A (en) * | 2017-10-23 | 2018-02-27 | Warehouse nameplate recognition method based on image convolutional neural network technique |
CN108154102A (en) * | 2017-12-21 | 2018-06-12 | 安徽师范大学 | A kind of traffic sign recognition method |
CN108268936A (en) * | 2018-01-17 | 2018-07-10 | 百度在线网络技术(北京)有限公司 | For storing the method and apparatus of convolutional neural networks |
CN108268936B (en) * | 2018-01-17 | 2022-10-28 | 百度在线网络技术(北京)有限公司 | Method and apparatus for storing convolutional neural networks |
CN109271934B (en) * | 2018-06-19 | 2023-05-02 | Kpit技术有限责任公司 | System and method for traffic sign recognition |
CN109271934A (en) * | 2018-06-19 | 2019-01-25 | Kpit技术有限责任公司 | System and method for Traffic Sign Recognition |
CN109086753A (en) * | 2018-10-08 | 2018-12-25 | 新疆大学 | Traffic sign recognition method, device based on binary channels convolutional neural networks |
CN109086753B (en) * | 2018-10-08 | 2022-05-10 | 新疆大学 | Traffic sign identification method and device based on two-channel convolutional neural network |
CN109766864A (en) * | 2019-01-21 | 2019-05-17 | 开易(北京)科技有限公司 | Image detecting method, image detection device and computer readable storage medium |
CN109815906B (en) * | 2019-01-25 | 2021-04-06 | 华中科技大学 | Traffic sign detection method and system based on step-by-step deep learning |
CN109815906A (en) * | 2019-01-25 | 2019-05-28 | 华中科技大学 | Method for traffic sign detection and system based on substep deep learning |
WO2020216227A1 (en) * | 2019-04-24 | 2020-10-29 | 华为技术有限公司 | Image classification method and apparatus, and data processing method and apparatus |
CN110135307A (en) * | 2019-04-30 | 2019-08-16 | 北京邮电大学 | Method for traffic sign detection and device based on attention mechanism |
CN112784084A (en) * | 2019-11-08 | 2021-05-11 | 阿里巴巴集团控股有限公司 | Image processing method and device and electronic equipment |
CN112784084B (en) * | 2019-11-08 | 2024-01-26 | 阿里巴巴集团控股有限公司 | Image processing method and device and electronic equipment |
CN112347972A (en) * | 2020-11-18 | 2021-02-09 | 合肥湛达智能科技有限公司 | High-dynamic region-of-interest image processing method based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104517103A (en) | Traffic sign classification method based on deep neural network | |
CN110163187B (en) | F-RCNN-based remote traffic sign detection and identification method | |
CN109902806B (en) | Method for determining target bounding box of noise image based on convolutional neural network | |
Gao et al. | Object classification using CNN-based fusion of vision and LIDAR in autonomous vehicle environment | |
CN106599773B (en) | Deep learning image identification method and system for intelligent driving and terminal equipment | |
CN108062569B (en) | Unmanned vehicle driving decision method based on infrared and radar | |
CN112818903A (en) | Small sample remote sensing image target detection method based on meta-learning and cooperative attention | |
CN108171112A (en) | Vehicle identification and tracking based on convolutional neural networks | |
CN110263786B (en) | Road multi-target identification system and method based on feature dimension fusion | |
CN104299006A (en) | Vehicle license plate recognition method based on deep neural network | |
CN104504395A (en) | Method and system for achieving classification of pedestrians and vehicles based on neural network | |
Maungmai et al. | Vehicle classification with deep learning | |
CN109543632A (en) | A kind of deep layer network pedestrian detection method based on the guidance of shallow-layer Fusion Features | |
Cai et al. | Night-time vehicle detection algorithm based on visual saliency and deep learning | |
CN107545263A (en) | A kind of object detecting method and device | |
CN113095152B (en) | Regression-based lane line detection method and system | |
Yin et al. | Fusionlane: Multi-sensor fusion for lane marking semantic segmentation using deep neural networks | |
Tang et al. | Integrated feature pyramid network with feature aggregation for traffic sign detection | |
CN110599521A (en) | Method for generating trajectory prediction model of vulnerable road user and prediction method | |
CN114821014A (en) | Multi-mode and counterstudy-based multi-task target detection and identification method and device | |
CN110443155A (en) | A kind of visual aid identification and classification method based on convolutional neural networks | |
CN114241250A (en) | Cascade regression target detection method and device and computer readable storage medium | |
CN112084897A (en) | Rapid traffic large-scene vehicle target detection method of GS-SSD | |
CN113128476A (en) | Low-power consumption real-time helmet detection method based on computer vision target detection | |
Khellal et al. | Pedestrian classification and detection in far infrared images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20150415 |