CN114973207B - Road sign identification method based on target detection - Google Patents
- Publication number
- CN114973207B (Application CN202210913244.3A)
- Authority
- CN
- China
- Prior art keywords
- detection model
- target detection
- output
- input end
- road sign
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06V20/582 — Recognition of traffic signs (scenes; context exterior to a vehicle using vehicle-mounted sensors; recognition of traffic objects)
- G06N3/044 — Recurrent networks, e.g. Hopfield networks (computing arrangements based on biological models; neural network architectures)
- G06N3/08 — Learning methods for neural networks
- G06V10/56 — Extraction of image or video features relating to colour
- G06V10/82 — Image or video recognition or understanding using neural networks
- G06V2201/07 — Target detection (indexing scheme)
- G06V2201/09 — Recognition of logos (indexing scheme)
Abstract
The invention discloses a road sign identification method based on target detection, which comprises the following steps: S1, collecting road sign images and preprocessing them to obtain contour data; S2, extracting feature data from the contour data through an LSTM feature extraction module; S3, constructing a training data set from the feature data and the corresponding labels; S4, training the target detection model on the training data set to obtain a trained target detection model; S5, processing the feature data of the road sign image to be recognized with the trained target detection model to obtain the corresponding road sign type. The invention solves the problem of low identification accuracy of existing target detection methods.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to a road sign identification method based on target detection.
Background
With the development of society, more and more people have begun to pay attention to unmanned driving technology, and identifying objects through detection algorithms remains one of its core component technologies. Because deep learning has made significant breakthroughs in recent years, applications of convolutional neural networks are increasingly mature. Since related neural networks such as CNNs have unique advantages in intelligent identification, target detection algorithms combined with deep learning have become an important direction of current recognition and detection research. However, most existing target detection methods process the target image directly with a CNN, and their recognition accuracy is not high.
Disclosure of Invention
Aiming at the defects in the prior art, the road sign identification method based on target detection provided by the invention solves the problem of low identification accuracy of the existing target detection method.
In order to achieve the purpose of the invention, the invention adopts the technical scheme that: a road sign identification method based on target detection comprises the following steps:
s1, collecting road sign images, and preprocessing the road sign images to obtain contour data;
s2, extracting feature data of the contour data through an LSTM feature extraction module;
s3, constructing the feature data and the corresponding labels into a training data set;
s4, training the target detection model by adopting a training data set to obtain a trained target detection model;
and S5, processing the feature data of the road sign image to be recognized by adopting the trained target detection model to obtain the corresponding road sign type.
Further, the step S1 includes the following sub-steps:
s11, collecting road sign images;
and S12, extracting the contour of the road sign image to obtain contour data.
Further, the step S12 includes the following sub-steps:
s121, selecting any pixel point from the road sign image as a standard point;
s122, calculating the color distance between other pixel points in the road sign image and the standard point to obtain a plurality of color distance values;
s123, carrying out gray processing on the pixel points corresponding to the color distance values lower than the color threshold, and giving the same gray value to the pixel points corresponding to the color distance values lower than the color threshold;
s124, taking the pixel point corresponding to the color distance value higher than the color threshold value as a new standard point;
s125, calculating the color distance value between the pixel point of the rest part after graying and the new standard point, and skipping to the step S123 until the road sign image is grayed into a gray image with different gray value areas;
s126, selecting pixel points in a non-edge area in the gray image as undetermined points;
s127, judging whether the gray values of the 9 pixel points adjacent to the undetermined point are all the same as the gray value of the undetermined point; if so, determining the undetermined point as a point to be deleted and jumping to the step S128; otherwise, reserving the undetermined point and jumping to the step S129;
s128, randomly finding a pixel point from the neighborhood of the point to be deleted as a new point to be deleted, and jumping to the step S127 until all pixel points in the non-edge area in all gray level images are traversed;
and S129, deleting the points to be deleted, wherein all the pixels of the points to be deleted and the edge area form contour data.
The beneficial effects of the above further scheme are as follows: because of the influence of illumination, the colors of the road sign image vary slightly. By setting a color threshold, one gray value is assigned to each area of the same type of color; pixel points that do not belong to the current color range are found iteratively and assigned another gray value, so the road sign image is grayed with each color area mapped to a different gray value. For each undetermined point, the 9 adjacent pixel points are examined; if their gray values all equal the gray value of the undetermined point, the point lies in a non-contour area. Finally, all points in non-contour areas are deleted, and what remains is the contour data.
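As an illustration, the 9-point neighbourhood test of steps S126 to S129 can be sketched as follows. This is a minimal sketch, not the patented implementation: the grid layout, and the convention that the undetermined point itself counts among the 9 examined points, are assumptions.

```python
def contour_points(gray, h, w):
    """Keep a non-edge pixel only if its 3x3 neighbourhood (9 points,
    the pixel itself included) contains more than one gray value;
    pixels of the edge area are always kept (steps S126-S129)."""
    keep = set()
    for i in range(h):
        for j in range(w):
            if i in (0, h - 1) or j in (0, w - 1):
                keep.add((i, j))  # edge area always belongs to the contour
                continue
            vals = {gray[(i + di, j + dj)]
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)}
            if len(vals) > 1:     # a neighbour differs: contour point, keep it
                keep.add((i, j))
    return keep
```

On a uniformly gray image every interior point is deleted, so only the edge ring survives.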
Further, the calculation formula of the color distance value in step S122 is:

$$d = \sqrt{(P_{1,R}-P_{2,R})^{2} + (P_{1,G}-P_{2,G})^{2} + (P_{1,B}-P_{2,B})^{2}}$$

wherein $d$ is the color distance value between the pixel point and the standard point, $P_{1,R}$, $P_{1,G}$ and $P_{1,B}$ are the R, G and B channels of the pixel color, and $P_{2,R}$, $P_{2,G}$ and $P_{2,B}$ are the R, G and B channels of the standard point color.
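For concreteness, the channel-wise distance can be computed as below (a sketch; the Euclidean form over the R, G and B channel terms is assumed):

```python
import math

def color_distance(p1, p2):
    """Color distance value of step S122: Euclidean distance between a
    pixel color p1 = (R, G, B) and the standard point color p2."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))
```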
Further, the structure of the target detection model in step S4 includes: the first residual block, the second residual block, the third residual block, the fourth residual block, the first Maxpool, the second Maxpool, the third Maxpool, the Concat layer, the first Conv, the BN layer and the second Conv;
the input end of the first residual block is used as the input end of the target detection model, and the output end of the first residual block is respectively connected with the input end of the second residual block and the input end of the first Maxpool; the output end of the second residual block is respectively connected with the input end of a third residual block and the input end of a second Maxpool; the output end of the third residual block is respectively connected with the input end of the fourth residual block and the input end of the third Maxpool; a first input end of the Concat layer is connected with an output end of the first Maxpool, a second input end of the Concat layer is connected with an output end of the second Maxpool, a third input end of the Concat layer is connected with an output end of the third Maxpool, a fourth input end of the Concat layer is connected with an output end of the fourth residual block, and an output end of the Concat layer is connected with an input end of the first Conv; the input end of the BN layer is connected with the output layer of the first Conv, and the output end of the BN layer is connected with the input end of the second Conv; and the output end of the second Conv is used as the output end of the target detection model.
Further, the window size of the first Maxpool is 3×3; the window size of the second Maxpool is 5×5; the window size of the third Maxpool is 7×7.
The beneficial effects of the above further scheme are as follows: features are extracted layer by layer by the first, second, third and fourth residual blocks, and the features extracted at each layer are fed into a max-pooling layer, where windows of different sizes retain features at different scales. The Concat layer then gathers these features, preserving their richness to the greatest degree and improving the accuracy of target identification.
Further, the loss function of the training process in step S4 is:

$$L = 1 - \frac{|\hat{y} \cap y|}{|\hat{y} \cup y|} + \frac{(x^{*}-\hat{x}^{*})^{2} + (y^{*}-\hat{y}^{*})^{2}}{d_i^{2}} + v$$

wherein $L$ is the loss value, $\hat{y}$ is the actual output of the target detection model, $y$ is the predicted output of the target detection model, $(x^{*}, y^{*})$ are the abscissa and ordinate of the geometric center of the region of the predicted output, $(\hat{x}^{*}, \hat{y}^{*})$ are the abscissa and ordinate of the geometric center of the region of the actual output, $d_i$ is the linear distance between the two farthest pixel points in the region covering both the actual output $\hat{y}$ and the predicted output $y$, and $v$ is the rate of change of the overlap area between the actual output $\hat{y}$ and the predicted output $y$.
The beneficial effects of the above further scheme are as follows: the method measures the difference between the actual output and the predicted output during training by the intersection-over-union of the two outputs, the ratio of the distance between their geometric centers to the linear distance between the two farthest pixel points in the area covering both outputs, and the rate of change of the overlap area, so that the actual output approaches the predicted output.
In conclusion, the beneficial effects of the invention are as follows: according to the method, the road sign image is preprocessed, the key outline data is extracted, the LSTM characteristic extraction module is used for extracting the characteristic data, the characteristic data and the corresponding label are used for training the target detection model, on one hand, the data volume is reduced, on the other hand, the target detection model accurately captures the corresponding relation between the characteristic data and the corresponding label through the characteristic data and the corresponding label, and the target identification accuracy is improved.
Drawings
FIG. 1 is a flow chart of the road sign identification method based on target detection;
FIG. 2 is a schematic diagram of the structure of a cell unit of the LSTM feature extraction module;
FIG. 3 is a schematic structural diagram of the target detection model.
Detailed Description
The following description of the embodiments of the present invention is provided to facilitate understanding by those skilled in the art. It should be understood, however, that the invention is not limited to the scope of these embodiments: to those skilled in the art, various changes are possible without departing from the spirit and scope of the invention as defined in the appended claims, and all subject matter produced using the inventive concept is protected.
As shown in FIG. 1, a road sign identification method based on target detection includes the following steps:
s1, collecting road sign images, and preprocessing the road sign images to obtain contour data;
the step S1 comprises the following sub-steps:
s11, collecting road sign images;
and S12, extracting the contour of the road sign image to obtain contour data.
The step S12 includes the following sub-steps:
s121, selecting any pixel point from the road sign image as a standard point;
s122, calculating the color distance between other pixel points in the road sign image and the standard point to obtain a plurality of color distance values;
the formula for calculating the color distance value in step S122 is:
wherein, the first and the second end of the pipe are connected with each other,is the color distance value between the pixel point and the standard point,of pixel colourThe passage is provided with a plurality of channels,of standard dot colourThe passage is provided with a plurality of channels,of pixel colourThe passage is provided with a plurality of channels,of standard dot colourThe passage is provided with a plurality of channels,of pixel colourThe passage is provided with a plurality of channels,of standard dot colourA channel.
S123, carrying out gray processing on the pixel points corresponding to the color distance values lower than the color threshold value, and giving the same gray value to the pixel points corresponding to the color distance values lower than the color threshold value;
s124, taking the pixel point corresponding to the color distance value higher than the color threshold value as a new standard point;
s125, calculating the color distance value between the pixel point of the rest part after graying and the new standard point, and skipping to the step S123 until the road sign image is grayed into a gray image with different gray value areas;
s126, selecting pixel points in a non-edge area in the gray image as undetermined points;
s127, judging whether the gray values of the 9 pixel points adjacent to the undetermined point are all the same as the gray value of the undetermined point; if so, determining the undetermined point as a point to be deleted and jumping to the step S128; otherwise, reserving the undetermined point and jumping to the step S129;
s128, randomly finding a pixel point from the neighborhood of the point to be deleted to serve as a new point to be deleted, and skipping to the step S127 until all pixel points in the non-edge area in all the gray level images are traversed;
and S129, deleting the points to be deleted, wherein all the pixels of the points to be deleted and the edge area form contour data.
Because of the influence of illumination, the colors of the road sign image vary slightly. By setting a color threshold, one gray value is assigned to each area of the same type of color; pixel points that do not belong to the current color range are found iteratively and assigned another gray value, so the road sign image is grayed with each color area mapped to a different gray value. For each undetermined point, the 9 adjacent pixel points are examined; if their gray values all equal the gray value of the undetermined point, the point lies in a non-contour area. Finally, all points in non-contour areas are deleted, and what remains is the contour data.
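The iterative graying of steps S121 to S125 can be sketched like this (a hypothetical helper, assuming a Euclidean color distance and an arbitrary scan order for picking standard points):

```python
import math

def gray_regions(pixels, threshold):
    """Assign one gray label per color region (steps S121-S125).

    pixels: dict mapping (x, y) -> (R, G, B); returns (x, y) -> int label.
    """
    def dist(a, b):  # color distance between two RGB triples
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))

    remaining = dict(pixels)
    labels = {}
    label = 0
    while remaining:
        # S121/S124: any not-yet-grayed pixel serves as the standard point
        _, standard = next(iter(remaining.items()))
        # S122-S123: pixels below the color threshold share this gray value
        close = [p for p, c in remaining.items() if dist(c, standard) < threshold]
        for p in close:
            labels[p] = label
            del remaining[p]
        label += 1  # S125: repeat on the remaining, not-yet-grayed pixels
    return labels
```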
S2, extracting feature data of the contour data through an LSTM feature extraction module;
FIG. 2 shows a cell unit of the LSTM feature extraction module in step S2. The input and output relations of the cell unit are:

$$r_t = \sigma(W_r[h_{t-1}, x_t] + b_r)$$
$$i_t = \sigma(W_i[h_{t-1}, x_t] + b_i)$$
$$g_t = \tanh(W_g[h_{t-1}, x_t] + b_g)$$
$$o_t = \sigma(W_o[h_{t-1}, x_t] + b_o)$$
$$c_t = r_t \odot c_{t-1} + i_t \odot g_t$$
$$h_t = o_t \odot \tanh(c_t)$$

wherein $r_t$ is the output of the reset gate, $h_{t-1}$ is the output of the cell unit at time $t-1$, $x_t$ is the input of the cell unit, $W_r$ is the weight of the reset gate, $b_r$ is the bias of the reset gate, $i_t$ is the output of the input gate, $W_i$ is the weight of the input gate, $b_i$ is the bias of the input gate, $g_t$ is the output of the memory gate, $W_g$ is the weight of the memory gate, $b_g$ is the bias of the memory gate, $h_t$ is the output of the cell unit at time $t$, $W_o$ is the weight of the output gate, $b_o$ is the bias of the output gate, $\tanh$ is the hyperbolic tangent activation function, and $\sigma$ is the sigmoid activation function.
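A scalar toy version of these cell equations (weights as single numbers rather than matrices, purely for illustration; the concatenation $[h_{t-1}, x_t]$ is stood in for by a sum) might look like:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_cell(x_t, h_prev, c_prev, w, b):
    """One scalar LSTM step following the cell equations above.
    w and b map gate names 'r', 'i', 'g', 'o' to scalar weights/biases."""
    z = h_prev + x_t                       # stand-in for [h_{t-1}, x_t]
    r_t = sigmoid(w["r"] * z + b["r"])     # reset gate
    i_t = sigmoid(w["i"] * z + b["i"])     # input gate
    g_t = math.tanh(w["g"] * z + b["g"])   # memory gate (candidate state)
    o_t = sigmoid(w["o"] * z + b["o"])     # output gate
    c_t = r_t * c_prev + i_t * g_t         # cell state update
    h_t = o_t * math.tanh(c_t)             # cell output
    return h_t, c_t
```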
S3, constructing the feature data and the corresponding labels into a training data set;
s4, training the target detection model by adopting a training data set to obtain a trained target detection model;
as shown in fig. 3, the structure of the target detection model in step S4 includes: a first residual block, a second residual block, a third residual block, a fourth residual block, a first Maxpool, a second Maxpool, a third Maxpool, a Concat layer, a first Conv, a BN layer, and a second Conv;
the input end of the first residual block is used as the input end of the target detection model, and the output end of the first residual block is respectively connected with the input end of the second residual block and the input end of the first Maxpool; the output end of the second residual block is respectively connected with the input end of the third residual block and the input end of the second Maxpool; the output end of the third residual block is respectively connected with the input end of the fourth residual block and the input end of the third Maxpool; a first input end of the Concat layer is connected with an output end of the first Maxpool, a second input end of the Concat layer is connected with an output end of the second Maxpool, a third input end of the Concat layer is connected with an output end of the third Maxpool, a fourth input end of the Concat layer is connected with an output end of the fourth residual block, and an output end of the Concat layer is connected with an input end of the first Conv; the input end of the BN layer is connected with the output layer of the first Conv, and the output end of the BN layer is connected with the input end of the second Conv; and the output end of the second Conv is used as the output end of the target detection model.
The window size of the first Maxpool is 3×3; the window size of the second Maxpool is 5×5; the window size of the third Maxpool is 7×7.
Features are extracted layer by layer by the first, second, third and fourth residual blocks, and the features extracted at each layer are fed into a max-pooling layer, where windows of different sizes retain features at different scales. The Concat layer then gathers these features, preserving their richness to the greatest degree and improving the accuracy of target identification.
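To illustrate how a pooling window retains the strongest response in each region, here is a naive k×k max pooling over a 2-D list. The stride (equal to k) is an assumption; the patent does not state the strides of its Maxpool layers.

```python
def max_pool(grid, k):
    """Naive k x k max pooling with stride k over a 2-D list of numbers."""
    rows = len(grid) // k
    cols = len(grid[0]) // k
    return [[max(grid[i * k + di][j * k + dj]
                 for di in range(k) for dj in range(k))
             for j in range(cols)]
            for i in range(rows)]
```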
The loss function of the training process in step S4 is:

$$L = 1 - \frac{|\hat{y} \cap y|}{|\hat{y} \cup y|} + \frac{(x^{*}-\hat{x}^{*})^{2} + (y^{*}-\hat{y}^{*})^{2}}{d_i^{2}} + v$$

wherein $L$ is the loss value, $\hat{y}$ is the actual output of the target detection model, $y$ is the predicted output of the target detection model, $(x^{*}, y^{*})$ are the abscissa and ordinate of the geometric center of the region of the predicted output, $(\hat{x}^{*}, \hat{y}^{*})$ are the abscissa and ordinate of the geometric center of the region of the actual output, $d_i$ is the linear distance between the two farthest pixel points in the region covering both the actual output $\hat{y}$ and the predicted output $y$, and $v$ is the rate of change of the overlap area between the actual output $\hat{y}$ and the predicted output $y$; here the region of an output is the image area formed by its pixel data.
According to the method, the difference between the actual output and the predicted output during training is measured by the intersection-over-union of the two outputs, the ratio of the distance between their geometric centers to the linear distance between the two farthest pixel points in the area covering both outputs, and the rate of change of the overlap area, so that the actual output approaches the predicted output.
And S5, processing the feature data of the road sign image to be recognized by adopting the trained target detection model to obtain the corresponding road sign type.
In conclusion, the beneficial effects of the invention are as follows: according to the method, the road sign image is preprocessed, the key outline data is extracted, the LSTM characteristic extraction module is used for extracting the characteristic data, the characteristic data and the corresponding label are used for training the target detection model, on one hand, the data volume is reduced, on the other hand, the target detection model accurately captures the corresponding relation between the characteristic data and the corresponding label through the characteristic data and the corresponding label, and the target identification accuracy is improved.
Claims (3)
1. A road sign identification method based on target detection is characterized by comprising the following steps:
s1, collecting road sign images, and preprocessing the road sign images to obtain contour data;
s2, extracting feature data of the contour data through an LSTM feature extraction module;
s3, constructing the feature data and the corresponding labels into a training data set;
s4, training the target detection model by adopting a training data set to obtain a trained target detection model;
s5, processing the feature data of the road sign image to be recognized by adopting the trained target detection model to obtain a corresponding road sign type;
the step S1 comprises the following sub-steps:
s11, collecting road sign images;
s12, extracting the contour of the road sign image to obtain contour data;
the step S12 comprises the following sub-steps:
s121, selecting any pixel point from the road sign image as a standard point;
s122, calculating the color distance between other pixel points in the road sign image and the standard point to obtain a plurality of color distance values;
s123, carrying out gray processing on the pixel points corresponding to the color distance values lower than the color threshold, and giving the same gray value to the pixel points corresponding to the color distance values lower than the color threshold;
s124, taking the pixel point corresponding to the color distance value higher than the color threshold value as a new standard point;
s125, calculating the color distance value between the pixel point of the rest part after graying and the new standard point, and jumping to the step S123 until the road sign image is grayed into a grayscale image with different grayscale value areas;
s126, selecting pixel points in a non-edge area in the gray image as undetermined points;
s127, judging whether the gray values of the 9 pixel points adjacent to the undetermined point are all the same as the gray value of the undetermined point; if so, determining the undetermined point as a point to be deleted and jumping to the step S128; otherwise, reserving the undetermined point and jumping to the step S129;
s128, randomly finding a pixel point from the neighborhood of the point to be deleted as a new point to be deleted, and jumping to the step S127 until all pixel points in the non-edge area in all gray level images are traversed;
s129, deleting the points to be deleted, wherein all the points to be deleted and the pixels of the edge area form contour data;
the calculation formula of the color distance value in step S122 is:

$$d = \sqrt{(P_{1,R}-P_{2,R})^{2} + (P_{1,G}-P_{2,G})^{2} + (P_{1,B}-P_{2,B})^{2}}$$

wherein $d$ is the color distance value between the pixel point and the standard point, $P_{1,R}$ is the R channel of the pixel color, $P_{2,R}$ is the R channel of the standard point color, $P_{1,G}$ is the G channel of the pixel color, $P_{2,G}$ is the G channel of the standard point color, $P_{1,B}$ is the B channel of the pixel color, and $P_{2,B}$ is the B channel of the standard point color;
the loss function of the training process in step S4 is:

$$L = 1 - \frac{|\hat{y} \cap y|}{|\hat{y} \cup y|} + \frac{(x^{*}-\hat{x}^{*})^{2} + (y^{*}-\hat{y}^{*})^{2}}{d_i^{2}} + v$$

wherein $L$ is the loss value, $\hat{y}$ is the actual output of the target detection model, $y$ is the predicted output of the target detection model, $x^{*}$ and $y^{*}$ are the abscissa and ordinate of the geometric center of the region of the predicted output, $\hat{x}^{*}$ and $\hat{y}^{*}$ are the abscissa and ordinate of the geometric center of the region of the actual output, $d_i$ is the linear distance between the two farthest pixel points in the region covering both the actual output $\hat{y}$ and the predicted output $y$, and $v$ is the rate of change of the overlap area between the actual output $\hat{y}$ and the predicted output $y$.
2. A road sign identification method based on target detection according to claim 1, wherein the structure of the target detection model in step S4 includes: the first residual block, the second residual block, the third residual block, the fourth residual block, the first Maxpool, the second Maxpool, the third Maxpool, the Concat layer, the first Conv, the BN layer and the second Conv;
the input end of the first residual block is used as the input end of the target detection model, and the output end of the first residual block is respectively connected with the input end of the second residual block and the input end of the first Maxpool; the output end of the second residual block is respectively connected with the input end of the third residual block and the input end of the second Maxpool; the output end of the third residual block is respectively connected with the input end of the fourth residual block and the input end of the third Maxpool; a first input end of the Concat layer is connected with an output end of the first Maxpool, a second input end of the Concat layer is connected with an output end of the second Maxpool, a third input end of the Concat layer is connected with an output end of the third Maxpool, a fourth input end of the Concat layer is connected with an output end of the fourth residual block, and an output end of the Concat layer is connected with an input end of the first Conv; the input end of the BN layer is connected with the output layer of the first Conv, and the output end of the BN layer is connected with the input end of the second Conv; and the output end of the second Conv is used as the output end of the target detection model.
3. A road sign identification method based on target detection as claimed in claim 2, wherein the window size of the first Maxpool is 3×3; the window size of the second Maxpool is 5×5; the window size of the third Maxpool is 7×7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210913244.3A CN114973207B (en) | 2022-08-01 | 2022-08-01 | Road sign identification method based on target detection |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210913244.3A CN114973207B (en) | 2022-08-01 | 2022-08-01 | Road sign identification method based on target detection |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114973207A CN114973207A (en) | 2022-08-30 |
CN114973207B true CN114973207B (en) | 2022-10-21 |
Family
ID=82970100
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210913244.3A Active CN114973207B (en) | 2022-08-01 | 2022-08-01 | Road sign identification method based on target detection |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114973207B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116071778B (en) * | 2023-03-31 | 2023-06-27 | 成都运荔枝科技有限公司 | Cold chain food warehouse management method |
CN116188585B (en) * | 2023-04-24 | 2023-07-11 | 成都垣景科技有限公司 | Mountain area photovoltaic target positioning method based on unmanned aerial vehicle photogrammetry |
CN116403094B (en) * | 2023-06-08 | 2023-08-22 | 成都菁蓉联创科技有限公司 | Embedded image recognition method and system |
CN117036923B (en) * | 2023-10-08 | 2023-12-08 | 广东海洋大学 | Underwater robot target detection method based on machine vision |
Citations (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106529482A (en) * | 2016-11-14 | 2017-03-22 | 叶瀚礼 | Traffic road sign identification method adopting set distance |
CN107122701A (en) * | 2017-03-03 | 2017-09-01 | 华南理工大学 | A kind of traffic route sign based on saliency and deep learning |
CN108664969A (en) * | 2018-04-28 | 2018-10-16 | 西安电子科技大学 | Landmark identification method based on condition random field |
CN110414417A (en) * | 2019-07-25 | 2019-11-05 | 电子科技大学 | A kind of traffic mark board recognition methods based on multi-level Fusion multi-scale prediction |
CN110929697A (en) * | 2019-12-17 | 2020-03-27 | 中国人民解放军海军航空大学 | Neural network target identification method and system based on residual error structure |
CN111259818A (en) * | 2020-01-18 | 2020-06-09 | 苏州浪潮智能科技有限公司 | Road sign identification method, system and device |
CN111428556A (en) * | 2020-02-17 | 2020-07-17 | 浙江树人学院(浙江树人大学) | Traffic sign recognition method based on capsule neural network |
CN111444847A (en) * | 2020-03-27 | 2020-07-24 | 广西综合交通大数据研究院 | Traffic sign detection and identification method, system, device and storage medium |
CN111476284A (en) * | 2020-04-01 | 2020-07-31 | 网易(杭州)网络有限公司 | Image recognition model training method, image recognition model training device, image recognition method, image recognition device and electronic equipment |
WO2020173022A1 (en) * | 2019-02-25 | 2020-09-03 | 平安科技(深圳)有限公司 | Vehicle violation identifying method, server and storage medium |
CN113255555A (en) * | 2021-06-04 | 2021-08-13 | 清华大学 | Method, system, processing equipment and storage medium for identifying Chinese traffic sign board |
CN113269161A (en) * | 2021-07-16 | 2021-08-17 | 四川九通智路科技有限公司 | Traffic signboard detection method based on deep learning |
CN114037960A (en) * | 2022-01-11 | 2022-02-11 | 合肥金星智控科技股份有限公司 | Flap valve state identification method and system based on machine vision |
WO2022033580A1 (en) * | 2020-08-14 | 2022-02-17 | 北京至真互联网技术有限公司 | Retinal vessel arteriovenous distinguishing method, apparatus and device |
CN114267025A (en) * | 2021-12-07 | 2022-04-01 | 天津大学 | Traffic sign detection method based on high-resolution network and light-weight attention mechanism |
CN114494870A (en) * | 2022-01-21 | 2022-05-13 | 山东科技大学 | Double-time-phase remote sensing image change detection method, model construction method and device |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104200431A (en) * | 2014-08-21 | 2014-12-10 | 浙江宇视科技有限公司 | Processing method and processing device of image graying |
US11927965B2 (en) * | 2016-02-29 | 2024-03-12 | AI Incorporated | Obstacle recognition method for autonomous robots |
US11048985B2 (en) * | 2019-06-12 | 2021-06-29 | Wipro Limited | Method and system for classifying an object in input data using artificial neural network model |
CN111680706B (en) * | 2020-06-17 | 2023-06-23 | 南开大学 | Dual-channel output contour detection method based on coding and decoding structure |
US20230289979A1 (en) * | 2020-11-13 | 2023-09-14 | Zhejiang University | A method for video moving object detection based on relative statistical characteristics of image pixels |
CN113838011A (en) * | 2021-09-13 | 2021-12-24 | 中南大学 | Rock block degree and/or distribution rule obtaining method, system, terminal and readable storage medium based on digital image color gradient |
Non-Patent Citations (8)
Title |
---|
"A Design and Implementation of Mobile Video surveillance Terminal Base on ARM"; Changjiang Jin et al.; Procedia Computer Science; 2017-12-31; Vol. 107; pp. 498-502 *
"Lightweight deep network for traffic sign classification"; Wang W et al.; Annals of Telecommunications; 2020-12-31; Vol. 75, No. 8; pp. 369-379 *
"Using the Center loss function to improve deep learning performance for EEG Signal classification"; Wenxiang Zhang et al.; 2018 Tenth International Conference on Advanced Computational Intelligence; 2018-06-11; pp. 234-241 *
"An Improved Deep Learning Algorithm for Road Traffic Sign Recognition"; He Ruibo et al.; CAAI Transactions on Intelligent Systems; 2020-11-30; Vol. 15, No. 6; pp. 1122-1123, Section 1.2 *
"Application of a Hybrid Grayscale Method in Container Number Recognition"; Zhang Chao et al.; Computer and Modernization; 2019-12-31; No. 5; pp. 41-45 *
"An Accurate Image Edge Detection Method"; Wu Siyuan et al.; Journal of Shaanxi University of Technology; 2007-12-31; Vol. 23, No. 4; pp. 32-35 *
"Traffic Road Sign Recognition Based on Convolutional Neural Networks"; Lin Nan et al.; Computer and Modernization; 2018-12-31; No. 7; pp. 103-113 *
"Traffic Sign Recognition Based on Spatial-Channel Attention Mechanism and Multi-Scale Fusion"; Li Jun et al.; Journal of Nanjing University of Posts and Telecommunications (Natural Science Edition); 2022-04-30; Vol. 42, No. 2; pp. 93-102 *
Also Published As
Publication number | Publication date |
---|---|
CN114973207A (en) | 2022-08-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114973207B (en) | Road sign identification method based on target detection | |
CN109977812B (en) | Vehicle-mounted video target detection method based on deep learning | |
CN108596129B (en) | Vehicle line-crossing detection method based on intelligent video analysis technology | |
CN110097044B (en) | One-stage license plate detection and identification method based on deep learning | |
CN110969160B (en) | License plate image correction and recognition method and system based on deep learning | |
CN110866430B (en) | License plate recognition method and device | |
CN113139521B (en) | Pedestrian boundary crossing monitoring method for electric power monitoring | |
CN104978567A (en) | Vehicle detection method based on scenario classification | |
Yang et al. | A vehicle license plate recognition system based on fixed color collocation | |
CN112651293B (en) | Video detection method for road illegal spreading event | |
CN110969164A (en) | Low-illumination imaging license plate recognition method and device based on deep learning end-to-end | |
CN111241987B (en) | Multi-target model visual tracking method based on cost-sensitive three-branch decision | |
CN107862341A (en) | Vehicle detection method |
CN113808166B (en) | Single-target tracking method based on clustering difference and depth twin convolutional neural network | |
CN111832497B (en) | Text detection post-processing method based on geometric features | |
CN112528994B (en) | Free angle license plate detection method, license plate recognition method and recognition system | |
CN113361467A (en) | License plate recognition method based on field adaptation | |
CN113129336A (en) | End-to-end multi-vehicle tracking method, system and computer readable medium | |
CN112465854A (en) | Unmanned aerial vehicle tracking method based on anchor-free detection algorithm | |
CN110008834B (en) | Steering wheel intervention detection and statistics method based on vision | |
CN111126303A (en) | Multi-parking-space detection method for intelligent parking | |
CN116091964A (en) | High-order video scene analysis method and system | |
CN113313008B (en) | Target and identification tracking method based on YOLOv3 network and mean shift | |
CN104504385A (en) | Recognition method of handwritten connected numerical string | |
CN110084190B (en) | Real-time unstructured road detection method under severe illumination environment based on ANN |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
GR01 | Patent grant | | |