CN114973207A - Road sign identification method based on target detection - Google Patents
- Publication number
- CN114973207A (application number CN202210913244.3A)
- Authority
- CN
- China
- Prior art keywords
- output
- detection model
- input end
- road sign
- point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06V20/582—Recognition of traffic objects, e.g. traffic signs, traffic lights or roads: of traffic signs
- G06N3/044—Neural network architecture: recurrent networks, e.g. Hopfield networks
- G06N3/08—Neural networks: learning methods
- G06V10/56—Extraction of image or video features relating to colour
- G06V10/82—Image or video recognition or understanding using pattern recognition or machine learning using neural networks
- G06V2201/07—Indexing scheme relating to image or video recognition or understanding: target detection
- G06V2201/09—Indexing scheme relating to image or video recognition or understanding: recognition of logos
Abstract
The invention discloses a road sign identification method based on target detection, which comprises the following steps: s1, collecting road sign images, and preprocessing the road sign images to obtain contour data; s2, extracting feature data of the contour data through an LSTM feature extraction module; s3, constructing the feature data and the corresponding labels into a training data set; s4, training the target detection model by adopting the training data set to obtain a trained target detection model; s5, processing the feature data of the road sign image to be recognized by adopting the trained target detection model to obtain a corresponding road sign type; the invention solves the problem of low identification accuracy of the existing target detection method.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a road sign identification method based on target detection.
Background
With the development of society, more and more people are paying attention to unmanned driving technology, and detecting and identifying objects by algorithm, as one component of this comprehensive technology, has maintained an important status. Deep learning has achieved significant breakthroughs in recent years, and the application of convolutional neural networks has become increasingly mature. Because related neural networks such as CNNs have unique advantages in intelligent identification, target detection algorithms combined with deep learning have become an important development direction for current identification and detection algorithms. However, most existing target detection methods process the target image directly with a CNN, and their recognition accuracy is not high.
Disclosure of Invention
Aiming at the defects of the prior art, the road sign identification method based on target detection provided by the invention solves the problem of the low identification accuracy of existing target detection methods.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: a road sign identification method based on target detection comprises the following steps:
s1, collecting road sign images, and preprocessing the road sign images to obtain contour data;
s2, extracting feature data of the contour data through an LSTM feature extraction module;
s3, constructing the feature data and the corresponding labels into a training data set;
s4, training the target detection model by adopting the training data set to obtain a trained target detection model;
and S5, processing the feature data of the road sign image to be recognized by adopting the trained target detection model to obtain the corresponding road sign type.
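Steps S1 to S5 above can be sketched as a simple pipeline. The function parameters below are illustrative placeholders for the modules described in this disclosure, not names taken from the patent:

```python
def identify_road_sign(image, preprocess, feature_extractor, detector):
    """Pipeline sketch of steps S1-S5 with pluggable stages.

    preprocess:        S1, road sign image -> contour data
    feature_extractor: S2, contour data -> feature data (e.g. an LSTM module)
    detector:          S5, feature data -> road sign type (a trained model)
    """
    contour_data = preprocess(image)
    feature_data = feature_extractor(contour_data)
    return detector(feature_data)
```

Training (steps S3 and S4) happens offline: the feature data and their labels form the training set used to fit the detector before this pipeline runs on new images.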
Further, the step S1 includes the following sub-steps:
s11, collecting road sign images;
and S12, extracting the contour of the road sign image to obtain contour data.
Further, the step S12 includes the following sub-steps:
s121, selecting any pixel point from the road sign image as a standard point;
s122, calculating the color distance between other pixel points in the road sign image and the standard point to obtain a plurality of color distance values;
s123, carrying out gray processing on the pixel points corresponding to the color distance values lower than the color threshold, and giving the same gray value to the pixel points corresponding to the color distance values lower than the color threshold;
s124, taking the pixel point corresponding to the color distance value higher than the color threshold value as a new standard point;
s125, calculating the color distance value between the pixel point of the rest part after graying and the new standard point, and jumping to the step S123 until the road sign image is grayed into a grayscale image with different grayscale value areas;
s126, selecting pixel points in a non-edge area in the gray image as undetermined points;
s127, judging whether the gray value of the adjacent 9 pixel points of the undetermined point is the same as the gray value of the undetermined point, if so, determining the undetermined point as a point to be deleted, and jumping to the step S128, otherwise, reserving the undetermined point and jumping to the step S129;
s128, randomly finding a pixel point from the neighborhood of the point to be deleted to serve as a new point to be deleted, and skipping to the step S127 until all pixel points in the non-edge area in all the gray level images are traversed;
and S129, deleting the points to be deleted, wherein all the pixels of the points to be deleted and the edge area form contour data.
The beneficial effects of the above further scheme are as follows: because of the influence of illumination, the colors of a road sign image are presented in slightly different degrees. By setting a color threshold, one gray value is assigned to the region of each similar color; the pixel points that do not belong to the current color range are found iteratively and assigned another gray value, so that the road sign image is grayed and each color region receives a different gray value. The 9 adjacent pixel points around each undetermined point are then examined: if their gray values are all the same as that of the undetermined point, the point lies in a non-contour region. Finally, all points of the non-contour regions are deleted, and what remains is the contour data.
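The iterative graying of steps S121 to S125 can be sketched as follows. This is a minimal illustration under two assumptions not fixed by the text above: the color distance is taken as the Euclidean distance between RGB triples, and the color threshold is an arbitrary constant.

```python
import math

def color_distance(p, q):
    """Euclidean distance between two RGB triples (an assumed metric)."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p, q)))

def iterative_graying(pixels, threshold=60.0):
    """Assign one gray value per color region (steps S121-S125).

    `pixels` maps (x, y) -> (r, g, b); returns (x, y) -> gray level,
    where each color region receives a distinct integer gray level.
    """
    remaining = dict(pixels)
    gray = {}
    level = 0
    while remaining:
        # S121/S124: pick any not-yet-grayed pixel as the standard point
        standard = next(iter(remaining.values()))
        # S122/S123: pixels within the color threshold share one gray value
        close = [xy for xy, color in remaining.items()
                 if color_distance(color, standard) < threshold]
        for xy in close:
            gray[xy] = level
            remaining.pop(xy)
        level += 1  # S125: repeat with a new standard point and gray value
    return gray
```

Each pass grays out one color region, and the loop terminates because every standard point grays at least itself.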
Further, the calculation formula of the color distance value in step S122 is:

d = √((R₁ − R₂)² + (G₁ − G₂)² + (B₁ − B₂)²)

where d is the color distance value between the pixel point and the standard point; R₁, G₁ and B₁ are the R, G and B channels of the pixel color; and R₂, G₂ and B₂ are the R, G and B channels of the standard point color.
Further, the structure of the target detection model in step S4 includes: the first residual block, the second residual block, the third residual block, the fourth residual block, the first Maxpool, the second Maxpool, the third Maxpool, the Concat layer, the first Conv, the BN layer and the second Conv;
the input end of the first residual block is used as the input end of the target detection model, and the output end of the first residual block is respectively connected with the input end of the second residual block and the input end of the first Maxpool; the output end of the second residual block is respectively connected with the input end of a third residual block and the input end of a second Maxpool; the output end of the third residual block is respectively connected with the input end of the fourth residual block and the input end of the third Maxpool; a first input end of the Concat layer is connected with an output end of the first Maxpool, a second input end of the Concat layer is connected with an output end of the second Maxpool, a third input end of the Concat layer is connected with an output end of the third Maxpool, a fourth input end of the Concat layer is connected with an output end of the fourth residual block, and an output end of the Concat layer is connected with an input end of the first Conv; the input end of the BN layer is connected with the output layer of the first Conv, and the output end of the BN layer is connected with the input end of the second Conv; and the output end of the second Conv is used as the output end of the target detection model.
Further, the window size of the first Maxpool is 3 × 3, the window size of the second Maxpool is 5 × 5, and the window size of the third Maxpool is 7 × 7.
The beneficial effects of the above further scheme are as follows: features are extracted layer by layer through the first, second, third and fourth residual blocks, and the features extracted at each layer are fed into a max-pooling layer, where windows of different sizes retain features at different scales; the Concat layer then gathers these features, preserving their richness to the greatest extent and improving the accuracy of target identification.
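The three pooling branches feeding the Concat layer can be illustrated with a toy max-pooling routine. The sketch uses stride equal to the window size and flattens the pooled maps before concatenation, which is a simplification of the patented structure rather than its exact layer behavior:

```python
def max_pool_2d(feature_map, window):
    """Max pooling over a 2-D list with a square window and stride = window."""
    h, w = len(feature_map), len(feature_map[0])
    pooled = []
    for i in range(0, h - window + 1, window):
        row = []
        for j in range(0, w - window + 1, window):
            row.append(max(feature_map[i + di][j + dj]
                           for di in range(window) for dj in range(window)))
        pooled.append(row)
    return pooled

def multi_scale_concat(feature_map, windows=(3, 5, 7)):
    """Pool the same map with several window sizes and concatenate the
    flattened results, as the three Maxpool branches feed the Concat layer."""
    out = []
    for win in windows:
        for row in max_pool_2d(feature_map, win):
            out.extend(row)
    return out
```

The small 3 × 3 window keeps fine detail while the 7 × 7 window keeps coarse structure, so the concatenated vector mixes features at several scales.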
Further, the loss function of the training process in step S4 is:

where L is the loss value; A is the actual output of the target detection model; B is the predicted output of the target detection model; x_B and y_B are the abscissa and ordinate of the geometric center of the region of the predicted output; x_A and y_A are the abscissa and ordinate of the geometric center of the region of the actual output; c is the linear distance between the two farthest pixel points in the region covering both the actual output A and the predicted output B; and v is the rate of change of the overlap between the actual output A and the predicted output B.
The beneficial effects of the above further scheme are as follows: the difference between the actual output and the predicted output during training is measured through the ratio of the intersection of the actual output and the predicted output to their union, the ratio of the distance between the actual output center and the predicted output center to the linear distance between the two farthest pixel points in the region covering both outputs, and the rate of change of the overlap area, so that the actual output approaches the predicted output.
In conclusion, the beneficial effects of the invention are as follows: the road sign image is preprocessed and the key contour data are extracted, the LSTM feature extraction module extracts the feature data, and the feature data with their corresponding labels are used to train the target detection model. On the one hand this reduces the data volume; on the other hand, the target detection model accurately captures the correspondence between the feature data and the labels, improving the accuracy of target identification.
Drawings
FIG. 1 is a flow chart of the road sign identification method based on target detection;
FIG. 2 is a schematic diagram of the structure of a cell unit of the LSTM feature extraction module;
fig. 3 is a schematic structural diagram of an object detection model.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of the embodiments; to those skilled in the art, various changes are permissible as long as they remain within the spirit and scope of the invention as defined in the appended claims, and all matter produced using the inventive concept is protected.
As shown in fig. 1, a road sign identification method based on target detection includes the following steps:
s1, collecting road sign images, and preprocessing the road sign images to obtain contour data;
the step S1 includes the following sub-steps:
s11, collecting road sign images;
and S12, extracting the contour of the road sign image to obtain contour data.
The step S12 includes the following sub-steps:
s121, selecting any pixel point from the road sign image as a standard point;
s122, calculating the color distance between other pixel points in the road sign image and the standard point to obtain a plurality of color distance values;
the calculation formula of the color distance value in step S122 is:

d = √((R₁ − R₂)² + (G₁ − G₂)² + (B₁ − B₂)²)

where d is the color distance value between the pixel point and the standard point; R₁, G₁ and B₁ are the R, G and B channels of the pixel color; and R₂, G₂ and B₂ are the R, G and B channels of the standard point color.
S123, carrying out gray processing on the pixel points corresponding to the color distance values lower than the color threshold, and giving the same gray value to the pixel points corresponding to the color distance values lower than the color threshold;
s124, taking the pixel point corresponding to the color distance value higher than the color threshold value as a new standard point;
s125, calculating the color distance value between the pixel point of the rest part after graying and the new standard point, and jumping to the step S123 until the road sign image is grayed into a grayscale image with different grayscale value areas;
s126, selecting pixel points in a non-edge area in the gray image as undetermined points;
s127, judging whether the gray value of the adjacent 9 pixel points of the undetermined point is the same as the gray value of the undetermined point, if so, determining the undetermined point as a point to be deleted, and jumping to the step S128, otherwise, reserving the undetermined point and jumping to the step S129;
s128, randomly finding a pixel point from the neighborhood of the point to be deleted to serve as a new point to be deleted, and skipping to the step S127 until all pixel points in the non-edge area in all the gray level images are traversed;
and S129, deleting the points to be deleted, wherein all the pixels of the points to be deleted and the edge area form contour data.
Because of the influence of illumination, the colors of a road sign image are presented in slightly different degrees. By setting a color threshold, one gray value is assigned to the region of each similar color; the pixel points that do not belong to the current color range are found iteratively and assigned another gray value, so that the road sign image is grayed and each color region receives a different gray value. The 9 adjacent pixel points around each undetermined point are then examined: if their gray values are all the same as that of the undetermined point, the point lies in a non-contour region. Finally, all points of the non-contour regions are deleted, and what remains is the contour data.
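The neighborhood test of steps S126 to S129 can be sketched as follows; treating "the adjacent 9 pixel points" as the 3 × 3 neighborhood including the point itself is an assumption of this illustration:

```python
def contour_points(gray, h, w):
    """Keep interior points whose 3x3 neighborhood is not uniform (S126-S129).

    `gray` maps (i, j) -> gray value for an h x w image; edge pixels are
    always kept, matching the 'non-edge area' restriction of step S126.
    """
    contour = set()
    for i in range(h):
        for j in range(w):
            if i in (0, h - 1) or j in (0, w - 1):
                contour.add((i, j))          # edge area is kept as-is
                continue
            neighborhood = {gray[(i + di, j + dj)]
                            for di in (-1, 0, 1) for dj in (-1, 0, 1)}
            if len(neighborhood) > 1:        # not uniform -> a contour point
                contour.add((i, j))
    return contour
```

Points inside a uniform gray region are discarded, leaving only the boundaries between gray-value regions plus the image edge.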
S2, extracting feature data of the contour data through an LSTM feature extraction module;
fig. 2 shows the cell units of the LSTM feature extraction module in step S2, where the input and output relationships of the cell units are:

where r_t is the output of the reset gate; h_(t−1) is the output of the cell unit at time t − 1; x_t is the input of the cell unit; W_r and b_r are the weight and bias of the reset gate; i_t is the output of the input gate; W_i and b_i are the weight and bias of the input gate; m_t is the output of the memory gate; W_m and b_m are the weight and bias of the memory gate; h_t is the output of the cell unit at time t; W_o and b_o are the weight and bias of the output gate; tanh is the hyperbolic tangent activation function; and σ is the activation function.
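The exact gate equations appear only in Fig. 2, so the sketch below uses a standard LSTM cell with scalar states as an assumed concrete form; the gate names follow the common forget/input/candidate/output convention rather than the reset/input/memory/output naming listed above:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell_step(x_t, h_prev, c_prev, W, b):
    """One step of a standard LSTM cell with scalar state for brevity.

    W and b are dicts keyed by gate name ('f', 'i', 'g', 'o'); each gate
    sees the concatenation [h_prev, x_t] via a (w_h, w_x) weight pair.
    """
    def gate(name, act):
        w_h, w_x = W[name]
        return act(w_h * h_prev + w_x * x_t + b[name])

    f_t = gate('f', sigmoid)          # forget gate
    i_t = gate('i', sigmoid)          # input gate
    g_t = gate('g', math.tanh)        # candidate (memory) gate
    o_t = gate('o', sigmoid)          # output gate
    c_t = f_t * c_prev + i_t * g_t    # new cell state
    h_t = o_t * math.tanh(c_t)        # new hidden output
    return h_t, c_t
```

Feeding the contour data through such cells one element at a time yields the hidden outputs h_t that serve as the extracted feature data.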
S3, constructing the feature data and the corresponding labels into a training data set;
s4, training the target detection model by adopting the training data set to obtain a trained target detection model;
as shown in fig. 3, the structure of the object detection model in step S4 includes: a first residual block, a second residual block, a third residual block, a fourth residual block, a first Maxpool, a second Maxpool, a third Maxpool, a Concat layer, a first Conv, a BN layer, and a second Conv;
the input end of the first residual block is used as the input end of the target detection model, and the output end of the first residual block is respectively connected with the input end of the second residual block and the input end of the first Maxpool; the output end of the second residual block is respectively connected with the input end of a third residual block and the input end of a second Maxpool; the output end of the third residual block is respectively connected with the input end of the fourth residual block and the input end of the third Maxpool; a first input end of the Concat layer is connected with an output end of the first Maxpool, a second input end of the Concat layer is connected with an output end of the second Maxpool, a third input end of the Concat layer is connected with an output end of the third Maxpool, a fourth input end of the Concat layer is connected with an output end of the fourth residual block, and an output end of the Concat layer is connected with an input end of the first Conv; the input end of the BN layer is connected with the output layer of the first Conv, and the output end of the BN layer is connected with the input end of the second Conv; and the output end of the second Conv is used as the output end of the target detection model.
The window size of the first Maxpool is 3 × 3, the window size of the second Maxpool is 5 × 5, and the window size of the third Maxpool is 7 × 7.
Features are extracted layer by layer through the first, second, third and fourth residual blocks, and the features extracted at each layer are fed into a max-pooling layer, where windows of different sizes retain features at different scales; the Concat layer then gathers these features, preserving their richness to the greatest extent and improving the accuracy of target identification.
The loss function of the training process in step S4 is:

where L is the loss value; A is the actual output of the target detection model; B is the predicted output of the target detection model; x_B and y_B are the abscissa and ordinate of the geometric center of the region of the predicted output; x_A and y_A are the abscissa and ordinate of the geometric center of the region of the actual output; c is the linear distance between the two farthest pixel points in the region covering both the actual output A and the predicted output B; and v is the rate of change of the overlap between the actual output A and the predicted output B, where the region of an output is the image area formed by its pixel data.
According to the method, the difference between the actual output and the predicted output during training is measured through the ratio of the intersection of the actual output and the predicted output to their union, the ratio of the distance between the actual output center and the predicted output center to the linear distance between the two farthest pixel points in the region covering both outputs, and the rate of change of the overlap area, so that the actual output approaches the predicted output.
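The quantities described above (intersection over union, and the center distance measured against the farthest-pixel span of the covering region) follow the general DIoU pattern; the sketch below is an assumed concrete form for axis-aligned boxes, not the patented formula itself:

```python
def iou(a, b):
    """Intersection-over-union of boxes given as (x1, y1, x2, y2)."""
    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = area(a) + area(b) - inter
    return inter / union if union > 0 else 0.0

def center_distance_ratio(a, b):
    """Squared center distance over the squared diagonal of the covering box
    (the diagonal plays the role of the farthest-pixel distance c)."""
    cax, cay = (a[0] + a[2]) / 2, (a[1] + a[3]) / 2
    cbx, cby = (b[0] + b[2]) / 2, (b[1] + b[3]) / 2
    cw = max(a[2], b[2]) - min(a[0], b[0])
    ch = max(a[3], b[3]) - min(a[1], b[1])
    d2 = (cax - cbx) ** 2 + (cay - cby) ** 2
    return d2 / (cw ** 2 + ch ** 2)

def diou_style_loss(actual, predicted):
    """DIoU-style loss: 1 - IoU plus the normalized center distance."""
    return 1.0 - iou(actual, predicted) + center_distance_ratio(actual, predicted)
```

The loss is zero for coinciding boxes and grows both as overlap shrinks and as the centers drift apart, which drives the two outputs toward each other during training.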
And S5, processing the feature data of the road sign image to be recognized by adopting the trained target detection model to obtain the corresponding road sign type.
In conclusion, the beneficial effects of the invention are as follows: the road sign image is preprocessed and the key contour data are extracted, the LSTM feature extraction module extracts the feature data, and the feature data with their corresponding labels are used to train the target detection model. On the one hand this reduces the data volume; on the other hand, the target detection model accurately captures the correspondence between the feature data and the labels, improving the accuracy of target identification.
Claims (7)
1. A road sign identification method based on target detection is characterized by comprising the following steps:
s1, collecting road sign images, and preprocessing the road sign images to obtain contour data;
s2, extracting feature data of the contour data through an LSTM feature extraction module;
s3, constructing the feature data and the corresponding labels into a training data set;
s4, training the target detection model by adopting the training data set to obtain a trained target detection model;
and S5, processing the feature data of the road sign image to be recognized by adopting the trained target detection model to obtain the corresponding road sign type.
2. The method for identifying road signs based on object detection as claimed in claim 1, wherein the step S1 comprises the following sub-steps:
s11, collecting road sign images;
and S12, extracting the contour of the road sign image to obtain contour data.
3. The road sign identification method based on target detection according to claim 2, wherein step S12 comprises the following sub-steps:
s121, selecting any pixel point from the road sign image as a standard point;
s122, calculating the color distance between other pixel points in the road sign image and the standard point to obtain a plurality of color distance values;
s123, carrying out gray processing on the pixel points corresponding to the color distance values lower than the color threshold, and giving the same gray value to the pixel points corresponding to the color distance values lower than the color threshold;
s124, taking the pixel point corresponding to the color distance value higher than the color threshold value as a new standard point;
s125, calculating the color distance value between the pixel point of the rest part after graying and the new standard point, and jumping to the step S123 until the road sign image is grayed into a grayscale image with different grayscale value areas;
s126, selecting pixel points in a non-edge area in the gray image as undetermined points;
s127, judging whether the gray value of the adjacent 9 pixel points of the undetermined point is the same as the gray value of the undetermined point, if so, determining the undetermined point as a point to be deleted, and jumping to the step S128, otherwise, reserving the undetermined point and jumping to the step S129;
s128, randomly finding a pixel point from the neighborhood of the point to be deleted as a new point to be deleted, and jumping to the step S127 until all pixel points in the non-edge area in all gray level images are traversed;
and S129, deleting the points to be deleted, wherein all the pixels of the points to be deleted and the edge area form contour data.
4. The road sign identification method based on target detection according to claim 2, wherein the color distance value in step S122 is calculated by the formula:

d = √((R₁ − R₂)² + (G₁ − G₂)² + (B₁ − B₂)²)

where d is the color distance value between the pixel point and the standard point; R₁, G₁ and B₁ are the R, G and B channels of the pixel color; and R₂, G₂ and B₂ are the R, G and B channels of the standard point color.
5. The method of claim 1, wherein the structure of the object detection model in S4 comprises: a first residual block, a second residual block, a third residual block, a fourth residual block, a first Maxpool, a second Maxpool, a third Maxpool, a Concat layer, a first Conv, a BN layer, and a second Conv;
the input end of the first residual block is used as the input end of the target detection model, and the output end of the first residual block is respectively connected with the input end of the second residual block and the input end of the first Maxpool; the output end of the second residual block is respectively connected with the input end of a third residual block and the input end of a second Maxpool; the output end of the third residual block is respectively connected with the input end of the fourth residual block and the input end of the third Maxpool; a first input end of the Concat layer is connected with an output end of the first Maxpool, a second input end of the Concat layer is connected with an output end of the second Maxpool, a third input end of the Concat layer is connected with an output end of the third Maxpool, a fourth input end of the Concat layer is connected with an output end of the fourth residual block, and an output end of the Concat layer is connected with an input end of the first Conv; the input end of the BN layer is connected with the output layer of the first Conv, and the output end of the BN layer is connected with the input end of the second Conv; and the output end of the second Conv is used as the output end of the target detection model.
6. The road sign identification method based on target detection according to claim 5, wherein the window size of the first Maxpool is 3 × 3, the window size of the second Maxpool is 5 × 5, and the window size of the third Maxpool is 7 × 7.
7. The road sign identification method based on target detection according to claim 1, wherein the loss function of the training process in step S4 is:

where L is the loss value; A is the actual output of the target detection model; B is the predicted output of the target detection model; x_B and y_B are the abscissa and ordinate of the geometric center of the region of the predicted output; x_A and y_A are the abscissa and ordinate of the geometric center of the region of the actual output; c is the linear distance between the two farthest pixel points in the region covering both the actual output A and the predicted output B; and v is the rate of change of the overlap between the actual output A and the predicted output B.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210913244.3A CN114973207B (en) | 2022-08-01 | 2022-08-01 | Road sign identification method based on target detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114973207A true CN114973207A (en) | 2022-08-30 |
CN114973207B CN114973207B (en) | 2022-10-21 |
Family
ID=82970100
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210913244.3A Expired - Fee Related CN114973207B (en) | 2022-08-01 | 2022-08-01 | Road sign identification method based on target detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114973207B (en) |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104200431A (en) * | 2014-08-21 | 2014-12-10 | 浙江宇视科技有限公司 | Processing method and processing device of image graying |
CN106529482A (en) * | 2016-11-14 | 2017-03-22 | 叶瀚礼 | Traffic road sign identification method adopting set distance |
CN107122701A (en) * | 2017-03-03 | 2017-09-01 | 华南理工大学 | A kind of traffic route sign based on saliency and deep learning |
CN108664969A (en) * | 2018-04-28 | 2018-10-16 | 西安电子科技大学 | Landmark identification method based on condition random field |
CN110414417A (en) * | 2019-07-25 | 2019-11-05 | 电子科技大学 | A kind of traffic mark board recognition methods based on multi-level Fusion multi-scale prediction |
CN110929697A (en) * | 2019-12-17 | 2020-03-27 | 中国人民解放军海军航空大学 | Neural network target identification method and system based on residual error structure |
CN111259818A (en) * | 2020-01-18 | 2020-06-09 | 苏州浪潮智能科技有限公司 | Road sign identification method, system and device |
CN111428556A (en) * | 2020-02-17 | 2020-07-17 | 浙江树人学院(浙江树人大学) | Traffic sign recognition method based on capsule neural network |
CN111444847A (en) * | 2020-03-27 | 2020-07-24 | 广西综合交通大数据研究院 | Traffic sign detection and identification method, system, device and storage medium |
CN111476284A (en) * | 2020-04-01 | 2020-07-31 | 网易(杭州)网络有限公司 | Image recognition model training method, image recognition model training device, image recognition method, image recognition device and electronic equipment |
WO2020173022A1 (en) * | 2019-02-25 | 2020-09-03 | 平安科技(深圳)有限公司 | Vehicle violation identifying method, server and storage medium |
CN111680706A (en) * | 2020-06-17 | 2020-09-18 | 南开大学 | Double-channel output contour detection method based on coding and decoding structure |
US20200394443A1 (en) * | 2019-06-12 | 2020-12-17 | Wipro Limited | Method and system for classifying an object in input data using artificial neural network model |
CN113255555A (en) * | 2021-06-04 | 2021-08-13 | 清华大学 | Method, system, processing equipment and storage medium for identifying Chinese traffic sign board |
CN113269161A (en) * | 2021-07-16 | 2021-08-17 | 四川九通智路科技有限公司 | Traffic signboard detection method based on deep learning |
CN113838011A (en) * | 2021-09-13 | 2021-12-24 | 中南大学 | Rock block degree and/or distribution rule obtaining method, system, terminal and readable storage medium based on digital image color gradient |
CN114037960A (en) * | 2022-01-11 | 2022-02-11 | 合肥金星智控科技股份有限公司 | Flap valve state identification method and system based on machine vision |
WO2022033580A1 (en) * | 2020-08-14 | 2022-02-17 | 北京至真互联网技术有限公司 | Retinal vessel arteriovenous distinguishing method, apparatus and device |
US20220066456A1 (en) * | 2016-02-29 | 2022-03-03 | AI Incorporated | Obstacle recognition method for autonomous robots |
CN114267025A (en) * | 2021-12-07 | 2022-04-01 | 天津大学 | Traffic sign detection method based on high-resolution network and light-weight attention mechanism |
CN114494870A (en) * | 2022-01-21 | 2022-05-13 | 山东科技大学 | Double-time-phase remote sensing image change detection method, model construction method and device |
WO2022099598A1 (en) * | 2020-11-13 | 2022-05-19 | 浙江大学 | Video dynamic target detection method based on relative statistical features of image pixels |
Non-Patent Citations (9)
Title |
---|
CHANGJIANG JIN et al.: "A Design and Implementation of Mobile Video Surveillance Terminal Based on ARM", Procedia Computer Science *
WANG W et al.: "Lightweight deep network for traffic sign classification", Annals of Telecommunications *
WENXIANG ZHANG et al.: "Using the Center Loss Function to Improve Deep Learning Performance for EEG Signal Classification", 2018 Tenth International Conference on Advanced Computational Intelligence *
何锐波 et al.: "An improved deep-learning road traffic sign recognition algorithm", 《智能系统学报》 (CAAI Transactions on Intelligent Systems) *
党宏社 et al.: "Traffic sign recognition based on an improved neural network", 《西安文理学院学报(自然科学版)》 (Journal of Xi'an University of Arts and Science (Natural Science Edition)) *
吴思远 et al.: "An accurate image edge detection method", 《陕西理工学院学报》 (Journal of Shaanxi University of Technology) *
张超 et al.: "Application of a hybrid grayscale method in container number recognition", 《计算机与现代化》 (Computer and Modernization) *
李军 et al.: "Traffic sign recognition based on a spatial-channel attention mechanism and multi-scale fusion", 《南京邮电大学学报(自然科学版)》 (Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition)) *
林楠 et al.: "Traffic road sign recognition based on convolutional neural networks", 《计算机与现代化》 (Computer and Modernization) *
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116071778A (en) * | 2023-03-31 | 2023-05-05 | 成都运荔枝科技有限公司 | Cold chain food warehouse management method |
CN116071778B (en) * | 2023-03-31 | 2023-06-27 | 成都运荔枝科技有限公司 | Cold chain food warehouse management method |
CN116188585A (en) * | 2023-04-24 | 2023-05-30 | 成都垣景科技有限公司 | Mountain area photovoltaic target positioning method based on unmanned aerial vehicle photogrammetry |
CN116403094A (en) * | 2023-06-08 | 2023-07-07 | 成都菁蓉联创科技有限公司 | Embedded image recognition method and system |
CN116403094B (en) * | 2023-06-08 | 2023-08-22 | 成都菁蓉联创科技有限公司 | Embedded image recognition method and system |
CN117036923A (en) * | 2023-10-08 | 2023-11-10 | 广东海洋大学 | Underwater robot target detection method based on machine vision |
CN117036923B (en) * | 2023-10-08 | 2023-12-08 | 广东海洋大学 | Underwater robot target detection method based on machine vision |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114973207B (en) | Road sign identification method based on target detection | |
CN109977812B (en) | Vehicle-mounted video target detection method based on deep learning | |
CN108596129B (en) | Vehicle line-crossing detection method based on intelligent video analysis technology | |
US10223597B2 (en) | Method and system for calculating passenger crowdedness degree | |
CN110097044B (en) | One-stage license plate detection and identification method based on deep learning | |
CN110969160B (en) | License plate image correction and recognition method and system based on deep learning | |
CN110866430B (en) | License plate recognition method and device | |
CN111814621A (en) | Multi-scale vehicle and pedestrian detection method and device based on attention mechanism | |
CN108491797A (en) | A kind of vehicle image precise search method based on big data | |
CN104978567A (en) | Vehicle detection method based on scenario classification | |
CN105825212A (en) | Distributed license plate recognition method based on Hadoop | |
CN113139521A (en) | Pedestrian boundary crossing monitoring method for electric power monitoring | |
CN113129336A (en) | End-to-end multi-vehicle tracking method, system and computer readable medium | |
CN113808166B (en) | Single-target tracking method based on clustering difference and depth twin convolutional neural network | |
CN112465854A (en) | Unmanned aerial vehicle tracking method based on anchor-free detection algorithm | |
CN110969164A (en) | Low-illumination imaging license plate recognition method and device based on deep learning end-to-end | |
CN113361467A (en) | License plate recognition method based on field adaptation | |
CN111259736B (en) | Real-time pedestrian detection method based on deep learning in complex environment | |
CN111241987B (en) | Multi-target model visual tracking method based on cost-sensitive three-branch decision | |
CN110008834B (en) | Steering wheel intervention detection and statistics method based on vision | |
CN107862341A (en) | A kind of vehicle checking method | |
CN111832497B (en) | Text detection post-processing method based on geometric features | |
CN112528994A (en) | Free-angle license plate detection method, license plate identification method and identification system | |
CN117037085A (en) | Vehicle identification and quantity statistics monitoring method based on improved YOLOv5 | |
CN116091964A (en) | High-order video scene analysis method and system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | Granted publication date: 20221021 ||