CN110458136B - Traffic sign identification method, device and equipment - Google Patents
- Publication number
- CN110458136B (application CN201910764738.8A)
- Authority
- CN
- China
- Prior art keywords
- traffic sign
- trained
- image
- neural network
- label
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/213—Feature extraction, e.g. by transforming the feature space; Summarisation; Mappings, e.g. subspace methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/56—Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
- G06V20/58—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
- G06V20/582—Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
Abstract
The application discloses a traffic sign recognition method, apparatus, and device, the method comprising: acquiring a traffic sign image to be recognized; inputting the traffic sign image to be recognized into a trained deep belief network model for feature extraction to obtain a first feature vector corresponding to the traffic sign image to be recognized; converting the first feature vector into a first pulse sequence; and inputting the first pulse sequence into a trained spiking neural network to obtain the recognition result it outputs. By combining a deep belief network model with a spiking neural network, the method recognizes traffic signs without manual feature extraction, greatly reducing human intervention and increasing recognition speed. Fully exploiting the advantages of both networks improves the recognition result and solves the technical problems of low accuracy and low speed in existing traffic sign recognition.
Description
Technical Field
The present application relates to the field of image recognition technologies, and in particular, to a method, an apparatus, and a device for recognizing a traffic sign.
Background
Traffic signs are an important source of information while a vehicle is running. Recognizing them accurately and quickly is significant for guaranteeing traffic safety and traffic order and for improving traffic efficiency, and is also important for current research on autonomous driving.
Traditional traffic sign recognition methods rely on image matching, hand-crafted feature extraction, and classifier combinations; they involve considerable human intervention and suffer from low recognition accuracy and low speed.
Disclosure of Invention
The present application provides a traffic sign recognition method, apparatus, and device to solve the technical problems of low accuracy and low speed in existing traffic sign recognition.
In view of the above, a first aspect of the present application provides a traffic sign identification method, including:
acquiring a traffic sign image to be identified;
inputting the traffic sign image to be recognized into a trained deep belief network model for feature extraction to obtain a first feature vector corresponding to the traffic sign image to be recognized;
converting the first feature vector into a first pulse sequence;
and inputting the first pulse sequence into a trained pulse neural network to obtain a recognition result output by the trained pulse neural network.
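The four steps above can be sketched as a simple pipeline. The stage names and stub models below are hypothetical stand-ins for the trained deep belief network and spiking network, not the patent's implementation:

```python
import numpy as np

def recognize_traffic_sign(image, dbn_extract, encode_to_spikes, snn_classify):
    """End-to-end sketch of the four claimed steps."""
    feature_vec = dbn_extract(image)             # trained DBN feature extraction
    spike_train = encode_to_spikes(feature_vec)  # feature vector -> pulse sequence
    return snn_classify(spike_train)             # trained SNN outputs the label

# Stub stages standing in for the trained models (illustrative only)
dbn_extract = lambda img: img.reshape(-1)[:64]
encode_to_spikes = lambda v: (v > v.mean()).astype(int)
snn_classify = lambda s: int(s.sum()) % 10

label = recognize_traffic_sign(np.random.rand(28, 28), dbn_extract,
                               encode_to_spikes, snn_classify)
```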
Preferably, the method further comprises the following steps:
acquiring a traffic sign image set to be trained;
inputting the traffic sign images to be trained in the traffic sign image set to be trained into a trained deep belief network model for feature extraction to obtain second feature vectors corresponding to the traffic sign images to be trained;
converting the second feature vector based on time-lag phase coding to obtain a second pulse sequence;
inputting the second pulse sequence into a pulse neural network, and training the pulse neural network;
and calculating the recognition accuracy rate of the impulse neural network to the traffic sign image to be trained, and finishing training when the recognition accuracy rate is higher than a threshold value to obtain the trained impulse neural network.
Preferably, before inputting the traffic sign images to be trained in the traffic sign image set to be trained into the trained deep belief network model for feature extraction to obtain the second feature vectors corresponding to the traffic sign images to be trained, the method further comprises:
and preprocessing the traffic sign image to be trained.
Preferably, the preprocessing comprises:
and carrying out size normalization processing on the traffic sign image to be trained based on a bilinear interpolation algorithm to obtain the normalized traffic sign image to be trained.
Preferably, the inputting of the second pulse sequence into a spiking neural network and the training of the spiking neural network comprise:
inputting the second pulse sequence into a spiking neural network, and training it with a learning method that combines triplet STDP and threshold plasticity.
A second aspect of the present application provides a traffic sign recognition apparatus, including:
the first image acquisition module is used for acquiring a traffic sign image to be identified;
the first feature extraction module is used for inputting the traffic sign image to be recognized into a trained deep belief network model for feature extraction to obtain a first feature vector corresponding to the traffic sign image to be recognized;
a first conversion module for converting the first feature vector into a first pulse sequence;
and the recognition module is used for inputting the first pulse sequence into the trained pulse neural network and acquiring a recognition result output by the trained pulse neural network.
Preferably, the apparatus further comprises:
the second image acquisition module is used for acquiring a traffic sign image set to be trained;
the second feature extraction module is used for inputting the traffic sign images to be trained in the traffic sign image set to be trained into the trained deep belief network model for feature extraction to obtain second feature vectors corresponding to the traffic sign images to be trained;
the second conversion module is used for converting the second characteristic vector based on time-lag phase coding to obtain a second pulse sequence;
the training module is used for inputting the second pulse sequence into a pulse neural network and training the pulse neural network;
and the calculation module is used for calculating the recognition accuracy of the impulse neural network on the traffic sign image to be trained, and when the recognition accuracy is higher than a threshold value, the training is completed to obtain the trained impulse neural network.
Preferably, the apparatus further comprises:
and the preprocessing module is used for preprocessing the traffic sign image to be trained.
A third aspect of the present application provides a traffic sign recognition device, comprising: a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute the traffic sign recognition method according to any one of the first aspect according to instructions in the program code.
A fourth aspect of the present application provides a computer-readable storage medium for storing program code for executing the traffic sign recognition method according to any one of the first aspects.
According to the technical scheme, the embodiment of the application has the following advantages:
the application provides a traffic sign identification method, which comprises the following steps: acquiring a traffic sign image to be identified; inputting the traffic sign image to be recognized into a trained deep belief network model for feature extraction to obtain a first feature vector corresponding to the traffic sign image to be recognized; converting the first feature vector into a first pulse sequence; and inputting the first pulse sequence into the trained pulse neural network to obtain a recognition result output by the trained pulse neural network. The method and the device have the advantages that the trained deep belief network model is used for carrying out feature extraction on the traffic sign image, manual feature extraction is not needed, feature dimension reduction and feature selection are carried out on the input traffic sign image through the deep belief network model, manual intervention is greatly reduced, high-level features are extracted from the original traffic sign image, redundant noise information is screened out, the method and the device are beneficial to improving the recognition result of a subsequent impulse neural network, the deep belief network model and the impulse neural network are combined for carrying out traffic sign recognition, the recognition speed is improved, the advantages of the deep belief network model and the impulse neural network are fully utilized, the recognition result is improved, and the technical problems of low accuracy and low speed of the existing traffic sign recognition are solved.
Drawings
FIG. 1 is a schematic flow chart diagram illustrating an embodiment of a traffic sign recognition method provided herein;
FIG. 2 is a schematic flow chart diagram illustrating another embodiment of a traffic sign recognition method provided herein;
fig. 3 is a schematic structural diagram of an embodiment of a traffic sign recognition apparatus provided in the present application.
Detailed Description
In order to make the technical solutions of the present application better understood, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
For ease of understanding, referring to fig. 1, an embodiment of a traffic sign recognition method provided herein includes:
Step 101, acquiring a traffic sign image to be recognized.
It should be noted that the acquired images may include images that do not meet the requirements, that is, images containing no traffic sign. So that the recognition result is not affected, the acquired images may be screened: images without traffic signs and blurred, unclear sign images are filtered out, and the images that meet the requirements serve as the finally acquired traffic sign images to be recognized.
Step 102, inputting the traffic sign image to be recognized into the trained deep belief network model for feature extraction to obtain a first feature vector corresponding to the traffic sign image to be recognized.
It should be noted that the deep belief network model (DBN) in this embodiment is formed by combining a two-layer deep Boltzmann machine (DBM) with a two-layer DBN. Both use the restricted Boltzmann machine (RBM) as their basic building block; the difference is that connections between DBM layers are undirected, while connections between DBN layers are directed.
The lower part of the deep belief network model uses the two DBM layers, which preserve information well, to perform preliminary dimensionality reduction on the traffic sign image to be recognized, yielding denoised features of high integrity. These features serve as the input of the two DBN layers, which then extract higher-level features. The DBM and DBN are each given unsupervised training followed by supervised fine-tuning, finally yielding the trained deep belief network model; extracting high-level features of the traffic sign image to be recognized through this model helps improve the recognition result of the subsequent spiking neural network.
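A minimal sketch of the stacked feature extractor described above. The layer sizes, random weights, and plain sigmoid forward pass are illustrative assumptions, since the patent does not specify them:

```python
import numpy as np

rng = np.random.default_rng(0)

def rbm_hidden(v, W, b):
    """Mean hidden activation of an RBM layer: sigmoid(v W + b)."""
    return 1.0 / (1.0 + np.exp(-(v @ W + b)))

# Illustrative stack: input -> two DBM-style layers -> two DBN-style layers
sizes = [784, 500, 300, 150, 64]
layers = [(rng.normal(0.0, 0.01, (a, b)), np.zeros(b))
          for a, b in zip(sizes[:-1], sizes[1:])]

v = rng.random(784)      # flattened, normalized traffic sign image
for W, b in layers:      # each layer's output feeds the next layer
    v = rbm_hidden(v, W, b)
feature_vector = v       # 64-dimensional high-level feature vector
```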
Step 103, converting the first feature vector into a first pulse sequence.
The input of a spiking neural network is represented as a pulse sequence, so the extracted first feature vector must be converted into the first pulse sequence. This adapts it to the subsequent use of the spiking neural network, so that the network can better recognize the traffic sign.
Step 104, inputting the first pulse sequence into the trained spiking neural network to obtain the recognition result output by the trained spiking neural network.
It should be noted that a conventional artificial neural network encodes the firing rate of biological neurons: a neuron's output is generally an analog value in a given interval, and both its computational power and its biological plausibility are weaker than those of a spiking neural network. Using a trained spiking neural network for traffic sign recognition therefore helps improve the recognition result, which is why this embodiment adopts it.
The applicant found that the prior-art combination of image matching, feature extraction, and a classifier suffers from heavy manual intervention, low recognition accuracy, and low speed. To solve these problems, the applicant proposes the traffic sign recognition method provided in the embodiment of the present application, which achieves the following technical effects:
In the traffic sign recognition method provided by the embodiment of the application, the trained deep belief network model extracts features from the traffic sign image, so no manual feature extraction is needed. The model performs feature dimensionality reduction and feature selection on the input image, greatly reducing human intervention; it extracts high-level features from the original image and screens out redundant noise, improving the recognition result of the subsequent spiking neural network. Recognizing traffic signs by combining the deep belief network model with the spiking neural network increases recognition speed, fully exploits the advantages of both, improves the recognition result, and solves the technical problems of low accuracy and low speed in existing traffic sign recognition.
For ease of understanding, referring to fig. 2, another embodiment of a traffic sign recognition method provided herein includes:
Step 201, acquiring a traffic sign image set to be trained.
It should be noted that the traffic sign images to be trained in this embodiment come from the German Traffic Sign Recognition Benchmark (GTSRB).
Step 202, preprocessing the traffic sign images to be trained.
It should be noted that, to ease feature extraction by the deep belief network, the traffic sign images to be trained may be size-normalized using a bilinear interpolation algorithm, so that all normalized images share the same size, for example 48 × 48 or 28 × 28 pixels.
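The size normalization can be sketched with a self-contained bilinear resize; this is the standard bilinear-interpolation formulation, not code taken from the patent:

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation."""
    h, w = img.shape
    ys = np.linspace(0, h - 1, out_h)   # sample positions in the source image
    xs = np.linspace(0, w - 1, out_w)
    y0 = np.floor(ys).astype(int); y1 = np.minimum(y0 + 1, h - 1)
    x0 = np.floor(xs).astype(int); x1 = np.minimum(x0 + 1, w - 1)
    wy = (ys - y0)[:, None]             # fractional offsets used as weights
    wx = (xs - x0)[None, :]
    top = img[np.ix_(y0, x0)] * (1 - wx) + img[np.ix_(y0, x1)] * wx
    bot = img[np.ix_(y1, x0)] * (1 - wx) + img[np.ix_(y1, x1)] * wx
    return top * (1 - wy) + bot * wy

resized = bilinear_resize(np.arange(100.0).reshape(10, 10), 28, 28)
```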
Some traffic sign images in the GTSRB database are blurred or carry large watermarks; the images can be screened to remove those of low quality and keep those of high quality. This helps the deep belief network extract useful feature information and improves the recognition result of the spiking neural network.
Step 203, inputting the preprocessed traffic sign images to be trained into the trained deep belief network model for feature extraction to obtain second feature vectors corresponding to the traffic sign images to be trained.
Step 204, converting the second feature vector based on time-lag phase coding to obtain a second pulse sequence.
It should be noted that common encoding schemes include time-lag (latency) coding and phase coding. This embodiment encodes the extracted second feature vector by combining the two, i.e., time-lag phase coding, which generates a better pulse sequence and helps improve the traffic sign recognition result of the spiking neural network.
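One plausible reading of time-lag phase coding is to map stronger features to earlier spike times (latency code) and align each spike to the phase of a reference oscillation (phase code). The exact scheme below (scaling, window `t_max`, oscillation `period`) is an assumption, not the patent's specification:

```python
import numpy as np

def latency_phase_encode(features, t_max=100.0, period=10.0):
    """Stronger features fire earlier (time-lag/latency code); each spike
    time is then snapped to the nearest phase slot of a reference
    oscillation with the given period (phase code)."""
    span = features.max() - features.min() + 1e-12
    f = (features - features.min()) / span        # scale to [0, 1]
    latencies = (1.0 - f) * t_max                 # high value -> short delay
    return np.round(latencies / period) * period  # align to oscillation phase

spike_times = latency_phase_encode(np.array([0.9, 0.1, 0.5, 1.0]))
```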
Step 205, inputting the second pulse sequence into the spiking neural network and training the spiking neural network.
It should be noted that the spiking neural network in this embodiment uses LIF (Leaky Integrate-and-Fire) neurons. Its first layer is a competitive layer composed of multiple LIF neurons, which achieve competitive learning through lateral inhibition. The lateral inhibition works as follows: whenever a neuron fires an action potential, it is immediately reset to its initial state and enters a refractory period, while all other neurons are reset to the resting membrane potential and enter an inhibitory period. Every time a neuron emits a pulse, all neurons are reset and competition restarts; the spiking neural network is trained through this competition. The second layer of the spiking neural network is the output layer, which outputs a similarity value for each category; the smaller the value, the higher the similarity, and the label of the category with the minimum similarity value is the final recognition result. The similarity value is calculated as follows:
Let the input image matrix be I = (x_ij) ∈ R^(n×n), and standardize it:

x'_ij = (x_ij − x_min) / (x_max − x_min)

where x_max and x_min are respectively the maximum and minimum pixel values in I, and x'_ij is the standardized input image.

Suppose the neurons with label L are u_1^L, u_2^L, …, u_{M_L}^L, where u_m^L is the m-th neuron labeled L, M_L is the number of neurons labeled L, and L = 0, 1, …, 9. Let c_m^L be the number of pulses the m-th neuron labeled L issues for the pulse sequence corresponding to the input image, and W_m^L its receptive-field weight matrix. Multiplying each neuron's pulse count by its receptive-field weights and accumulating over the M_L neurons labeled L gives the reconstruction of the input image by the neurons labeled L:

R_L = Σ_{m=1}^{M_L} c_m^L · W_m^L
Let R_L = (r_ij) ∈ R^(n×n). Standardize it in the same way to obtain the standardized reconstructed image r'_ij, then compute the similarity value S_L between the standardized input image and the standardized reconstructed image:

S_L = sqrt( Σ_{i=1}^{n} Σ_{j=1}^{n} (x'_ij − r'_ij)² )

This yields 10 similarity values S_0, S_1, …, S_9. Comparing S_0, S_1, …, S_9, the label of the class with the minimum similarity value is the final recognition result; for example, if S_6 is the minimum, the class represented by label 6 is the final recognition result.
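The reconstruction-and-comparison classification above can be sketched as follows. The sizes, random spike counts, and weights are illustrative, and the Euclidean distance stands in for the similarity measure (smaller value means more similar):

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_labels, n_neurons = 8, 10, 5          # illustrative sizes

def standardize(img):
    """x'_ij = (x_ij - x_min) / (x_max - x_min)."""
    return (img - img.min()) / (img.max() - img.min() + 1e-12)

counts = rng.integers(1, 20, (n_labels, n_neurons))   # pulse counts c_m^L
fields = rng.random((n_labels, n_neurons, n, n))      # receptive fields W_m^L
x = standardize(rng.random((n, n)))                   # standardized input image

scores = []
for L in range(n_labels):
    R = np.tensordot(counts[L], fields[L], axes=1)    # R_L = sum_m c_m^L W_m^L
    r = standardize(R)                                # standardized reconstruction
    scores.append(float(np.sqrt(((x - r) ** 2).sum())))  # distance S_L
label = int(np.argmin(scores))   # smallest similarity value -> recognized class
```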
The spiking neural network is trained with a learning method combining triplet STDP and threshold plasticity: triplet STDP acts on the synapses, while threshold plasticity acts on the neurons' threshold potentials, so the combined learning method also limits the firing rate of the neurons.
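A hedged sketch of the two learning rules: a pairwise STDP weight update stands in for the patent's triplet variant (whose coefficients are not given here), and a simple adaptive threshold illustrates how threshold plasticity limits the firing rate; all constants are assumptions:

```python
import numpy as np

def stdp_update(w, dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pairwise STDP sketch: pre-before-post (dt > 0) potentiates the
    synapse, post-before-pre (dt < 0) depresses it."""
    if dt > 0:
        w = w + a_plus * np.exp(-dt / tau)
    else:
        w = w - a_minus * np.exp(dt / tau)
    return float(np.clip(w, 0.0, 1.0))

def threshold_update(theta, fired, theta_plus=0.05, decay=0.999):
    """Threshold plasticity sketch: firing raises the neuron's threshold,
    which slowly decays back, limiting the neuron's firing rate."""
    return (theta + theta_plus if fired else theta) * decay

w_after = stdp_update(0.5, dt=5.0)              # causal pairing -> weight grows
theta_after = threshold_update(1.0, fired=True)
```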
Step 206, calculating the recognition accuracy of the spiking neural network on the traffic sign images to be trained; when the accuracy exceeds a threshold, training is complete and the trained spiking neural network is obtained.
It should be noted that the recognition accuracy is the ratio of the number of correctly recognized training images to the number of all training images. When the accuracy exceeds a preset threshold, training is considered complete and is stopped, yielding the trained spiking neural network.
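The stopping criterion reduces to a short check; the function name and the 0.7 threshold in the usage line are illustrative:

```python
def training_complete(predictions, labels, threshold=0.95):
    """Accuracy = correctly recognized / total; training stops once the
    accuracy exceeds the preset threshold."""
    correct = sum(int(p == y) for p, y in zip(predictions, labels))
    accuracy = correct / len(labels)
    return accuracy > threshold, accuracy

done, acc = training_complete([1, 2, 3, 3], [1, 2, 3, 4], threshold=0.7)
```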
Step 207, acquiring the traffic sign image to be recognized.
Step 208, inputting the traffic sign image to be recognized into the trained deep belief network model for feature extraction to obtain a first feature vector corresponding to the traffic sign image to be recognized.
Step 209, converting the first feature vector into a first pulse sequence.
Step 210, inputting the first pulse sequence into the trained spiking neural network and acquiring the recognition result output by the trained spiking neural network.
It should be noted that steps 207 to 210 in the embodiment of the present application are the same as steps 101 to 104 in the previous embodiment, and are not repeated herein.
For easy understanding, referring to fig. 3, an embodiment of a traffic sign recognition apparatus according to the present invention includes:
the first image acquisition module 301 is configured to acquire a traffic sign image to be identified;
the first feature extraction module 302 is configured to input the traffic sign image to be recognized into a trained deep belief network model for feature extraction, so as to obtain a first feature vector corresponding to the traffic sign image to be recognized;
a first conversion module 303 for converting the first feature vector into a first pulse sequence;
and the identification module 304 is configured to input the first pulse sequence into the trained spiking neural network, and obtain an identification result output by the trained spiking neural network.
Further, the apparatus further comprises:
a second image obtaining module 305, configured to obtain a traffic sign image set to be trained;
the second feature extraction module 306 is configured to input the traffic sign images to be trained in the traffic sign image set to be trained into the trained deep belief network model for feature extraction, so as to obtain second feature vectors corresponding to the traffic sign images to be trained;
a second conversion module 307, configured to convert the second feature vector by time-lag phase coding to obtain a second pulse sequence;
the training module 308 is configured to input the second pulse sequence into the spiking neural network, and train the spiking neural network;
and the calculating module 309 is configured to calculate the recognition accuracy of the spiking neural network on the traffic sign images to be trained; when the accuracy is higher than a threshold, training is complete and the trained spiking neural network is obtained.
Further, the apparatus further comprises:
and the preprocessing module 310 is configured to preprocess the traffic sign image to be trained.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present application may be substantially implemented or contributed to by the prior art, or all or part of the technical solution may be embodied in a software product, which is stored in a storage medium and includes instructions for executing all or part of the steps of the method described in the embodiments of the present application through a computer device (which may be a personal computer, a server, or a network device). And the aforementioned storage medium includes: a U disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions in the embodiments of the present application.
Claims (10)
1. A traffic sign recognition method, comprising:
acquiring a traffic sign image to be identified;
inputting the traffic sign image to be recognized into a trained deep belief network model for feature extraction to obtain a first feature vector corresponding to the traffic sign image to be recognized;
converting the first feature vector into a first pulse sequence;
inputting the first pulse sequence into a trained pulse neural network, obtaining a recognition result output by the trained pulse neural network, specifically, inputting the first pulse sequence into the trained pulse neural network, issuing pulses to the first pulse sequence through neurons corresponding to each label in the trained pulse neural network, multiplying and accumulating the number of pulses issued by the neurons of the same label and a corresponding receptive field weight matrix to obtain a reconstructed image of the traffic sign image to be recognized by the neurons corresponding to each label, and calculating similarity values of the traffic sign image to be recognized after the standardization processing and the reconstructed image after the standardization processing, wherein the label with the smallest similarity value is the final recognition result;
wherein, the acquisition formula of the reconstructed image is as follows:
R_L = Σ_{m=1}^{M_L} c_m^L · W_m^L

where R_L is the reconstructed image of the traffic sign image to be recognized produced by the neurons labeled L, c_m^L is the number of pulses issued by the m-th neuron labeled L for the first pulse sequence corresponding to the traffic sign image to be recognized, M_L is the number of neurons labeled L, and W_m^L is the receptive-field weight matrix corresponding to the m-th neuron labeled L;
the calculation formula of the similarity value is as follows:
S_L = sqrt( Σ_{i=1}^{n} Σ_{j=1}^{n} (x'_ij − r'_ij)² )

where S_L is the similarity value between the standardized traffic sign image to be recognized x'_ij and the standardized reconstructed image r'_ij corresponding to the neurons labeled L, n is the number of rows and columns of the standardized traffic sign image to be recognized or of the standardized reconstructed image, and i and j are respectively the row and column coordinates of the pixels in the standardized traffic sign image to be recognized or the standardized reconstructed image.
2. The traffic sign recognition method of claim 1, further comprising:
acquiring a traffic sign image set to be trained;
inputting the traffic sign images to be trained in the traffic sign image set to be trained into a trained deep belief network model for feature extraction to obtain second feature vectors corresponding to the traffic sign images to be trained;
converting the second feature vector based on time-lag phase coding to obtain a second pulse sequence;
inputting the second pulse sequence into a pulse neural network, and training the pulse neural network;
and calculating the recognition accuracy rate of the impulse neural network to the traffic sign image to be trained, and finishing training when the recognition accuracy rate is higher than a threshold value to obtain the trained impulse neural network.
3. The traffic sign recognition method according to claim 2, wherein before the step of inputting the traffic sign images to be trained in the traffic sign image set to be trained into the trained deep belief network model for feature extraction to obtain the second feature vectors corresponding to the traffic sign images to be trained, the method further comprises:
and preprocessing the traffic sign image to be trained.
4. The traffic sign recognition method of claim 3, wherein the preprocessing comprises:
performing size normalization on the traffic sign image to be trained based on a bilinear interpolation algorithm to obtain a size-normalized traffic sign image to be trained.
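A size normalization step of this kind can be sketched with plain NumPy; the following is an illustrative 2-D bilinear interpolation for grayscale input (the target size and function name are assumptions, not taken from the patent):

```python
import numpy as np

def bilinear_resize(img, out_h, out_w):
    """Resize a 2-D grayscale image with bilinear interpolation so
    that every image shares a common size before feature extraction."""
    in_h, in_w = img.shape
    # Map each output pixel back into input coordinates.
    ys = np.linspace(0, in_h - 1, out_h)
    xs = np.linspace(0, in_w - 1, out_w)
    y0 = np.floor(ys).astype(int)
    y1 = np.minimum(y0 + 1, in_h - 1)
    x0 = np.floor(xs).astype(int)
    x1 = np.minimum(x0 + 1, in_w - 1)
    wy = (ys - y0)[:, None]          # vertical interpolation weights
    wx = (xs - x0)[None, :]          # horizontal interpolation weights
    # Interpolate along x on the two bracketing rows, then along y.
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

Corner pixels are preserved exactly, and interior output pixels are weighted averages of their four nearest input neighbours.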
5. The traffic sign recognition method according to claim 2, wherein the inputting the second pulse sequence into a spiking neural network and training the spiking neural network comprises:
inputting the second pulse sequence into the spiking neural network, and training the spiking neural network based on a learning method combining triplet STDP and threshold plasticity.
6. A traffic sign recognition apparatus, comprising:
the first image acquisition module is used for acquiring a traffic sign image to be identified;
the first feature extraction module is used for inputting the traffic sign image to be recognized into a trained deep belief network model for feature extraction to obtain a first feature vector corresponding to the traffic sign image to be recognized;
a first conversion module for converting the first feature vector into a first pulse sequence;
the identification module is configured to input the first pulse sequence into the trained spiking neural network and obtain the identification result output by the trained spiking neural network; specifically, the first pulse sequence is input into the trained spiking neural network, the neurons corresponding to each label in the trained spiking neural network fire pulses in response to the first pulse sequence, the numbers of pulses fired by the neurons of the same label are multiplied by the corresponding receptive field weight matrices and accumulated to obtain, for the neurons of each label, a reconstructed image of the traffic sign image to be identified, and similarity values between the normalized traffic sign image to be identified and the normalized reconstructed images are calculated, the label with the smallest similarity value being the final identification result;
wherein, the acquisition formula of the reconstructed image is as follows:
in the formula, R_L = Σ_{m=1}^{M_L} N_m^L · W_m^L, where R_L is the reconstructed image of the traffic sign image to be recognized produced by the neurons whose label is L, M_L is the number of neurons with label L, N_m^L is the number of pulses issued by the m-th neuron of label L in response to the first pulse sequence corresponding to the traffic sign image to be recognized, and W_m^L is the receptive field weight matrix corresponding to the m-th neuron of label L;
the calculation formula of the similarity value is as follows:
in the formula, S_L = sqrt( Σ_{i=1}^{n} Σ_{j=1}^{n} ( I′(i,j) − R′_L(i,j) )² ), where S_L is the similarity value between the normalized traffic sign image to be recognized I′ and the normalized reconstructed image R′_L corresponding to the neurons of label L, n is the number of rows and columns of the traffic sign image to be recognized or of the normalized reconstructed image, and i, j are respectively the row coordinate value and the column coordinate value of a pixel in the traffic sign image to be recognized or in the normalized reconstructed image.
7. The traffic sign recognition device of claim 6, further comprising:
the second image acquisition module is used for acquiring a traffic sign image set to be trained;
the second feature extraction module is used for inputting the traffic sign images to be trained in the traffic sign image set to be trained into the trained deep belief network model for feature extraction to obtain second feature vectors corresponding to the traffic sign images to be trained;
the second conversion module is used for converting the second feature vector based on time-lag phase coding to obtain a second pulse sequence;
the training module is used for inputting the second pulse sequence into a spiking neural network and training the spiking neural network;
the calculation module is used for calculating the recognition accuracy of the spiking neural network on the traffic sign images to be trained, and completing training when the recognition accuracy is higher than a threshold, thereby obtaining the trained spiking neural network.
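The time-lag phase coding performed by the second conversion module can be illustrated with a minimal sketch. The concrete mapping is an assumption: larger feature values are encoded as earlier spike times within a fixed encoding window.

```python
import numpy as np

def phase_encode(features, t_window=10.0):
    """Map a feature vector to one spike time per feature within a
    window of t_window ms: stronger features fire earlier (assumed
    coding scheme; the window length is illustrative)."""
    f = np.asarray(features, dtype=float)
    rng = f.max() - f.min()
    norm = (f - f.min()) / rng if rng > 0 else np.zeros_like(f)
    return t_window * (1.0 - norm)   # spike times, earliest = strongest
```

The resulting spike-time vector is the pulse sequence fed to the spiking neural network's input layer.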
8. The traffic sign recognition device of claim 7, further comprising:
and the preprocessing module is used for preprocessing the traffic sign image to be trained.
9. A traffic sign recognition apparatus, comprising a processor and a memory;
the memory is used for storing program codes and transmitting the program codes to the processor;
the processor is configured to execute, according to instructions in the program code, the traffic sign recognition method according to any one of claims 1 to 5.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium is configured to store program code for executing the traffic sign recognition method according to any one of claims 1 to 5.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910764738.8A CN110458136B (en) | 2019-08-19 | 2019-08-19 | Traffic sign identification method, device and equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110458136A CN110458136A (en) | 2019-11-15 |
CN110458136B true CN110458136B (en) | 2022-07-12 |
Family
ID=68487609
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910764738.8A Active CN110458136B (en) | 2019-08-19 | 2019-08-19 | Traffic sign identification method, device and equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110458136B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113395331B (en) * | 2021-05-25 | 2022-03-15 | 郑州信大捷安信息技术股份有限公司 | Safety traffic sign error surveying method and system based on Internet of vehicles |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103235937A (en) * | 2013-04-27 | 2013-08-07 | 武汉大学 | Pulse-coupled neural network-based traffic sign identification method |
CN109522448A (en) * | 2018-10-18 | 2019-03-26 | 天津大学 | A method of robustness speech Gender Classification is carried out based on CRBM and SNN |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8990132B2 (en) * | 2010-01-19 | 2015-03-24 | James Ting-Ho Lo | Artificial neural networks based on a low-order model of biological neural networks |
CN102262728B (en) * | 2011-07-28 | 2012-12-19 | 电子科技大学 | Road traffic sign identification method |
US9098811B2 (en) * | 2012-06-04 | 2015-08-04 | Brain Corporation | Spiking neuron network apparatus and methods |
CN109816026B (en) * | 2019-01-29 | 2021-09-10 | 清华大学 | Fusion device and method of convolutional neural network and impulse neural network |
Non-Patent Citations (2)
Title |
---|
A handwritten numeral recognition method based on STDP with unsupervised learning; Yonghong Xie et al.; 2017 International Conference on Wireless Communications, Signal Processing and Networking (WiSPNET); 2017-03-24; full text *
A review of the ideas and architectures of brain-like computers; Huang Tiejun et al.; Journal of Computer Research and Development; 2019-06-15 (No. 006); full text *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20230059877A1 (en) | Deep learning-based splice site classification | |
CN109615582B (en) | Face image super-resolution reconstruction method for generating countermeasure network based on attribute description | |
CN110287805B (en) | Micro-expression identification method and system based on three-stream convolutional neural network | |
CN110399821B (en) | Customer satisfaction acquisition method based on facial expression recognition | |
CN104268568B (en) | Activity recognition method based on Independent subspace network | |
CN109344731B (en) | Lightweight face recognition method based on neural network | |
Yan et al. | Multi-attributes gait identification by convolutional neural networks | |
CN112560810B (en) | Micro-expression recognition method based on multi-scale space-time characteristic neural network | |
Mallouh et al. | Utilizing CNNs and transfer learning of pre-trained models for age range classification from unconstrained face images | |
CN108985252B (en) | Improved image classification method of pulse depth neural network | |
Liu et al. | Dictionary learning for VQ feature extraction in ECG beats classification | |
CN107194376A (en) | Mask fraud convolutional neural networks training method and human face in-vivo detection method | |
Caroppo et al. | Comparison between deep learning models and traditional machine learning approaches for facial expression recognition in ageing adults | |
CN112784929B (en) | Small sample image classification method and device based on double-element group expansion | |
CN107292267A (en) | Photo fraud convolutional neural networks training method and human face in-vivo detection method | |
CN110414541B (en) | Method, apparatus, and computer-readable storage medium for identifying an object | |
CN107301396A (en) | Video fraud convolutional neural networks training method and human face in-vivo detection method | |
CN113989890A (en) | Face expression recognition method based on multi-channel fusion and lightweight neural network | |
CN108021950B (en) | Image classification method based on low-rank sparse representation | |
Duffner | Face image analysis with convolutional neural networks | |
He et al. | What catches the eye? Visualizing and understanding deep saliency models | |
CN107239827B (en) | Spatial information learning method based on artificial neural network | |
CN110458136B (en) | Traffic sign identification method, device and equipment | |
Kharghanian et al. | Pain detection using batch normalized discriminant restricted Boltzmann machine layers | |
CN107766790B (en) | Human behavior identification method based on local constraint low-rank coding |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||