CN112712036A - Traffic sign recognition method and device, electronic equipment and computer storage medium - Google Patents

Traffic sign recognition method and device, electronic equipment and computer storage medium Download PDF

Info

Publication number
CN112712036A
CN112712036A
Authority
CN
China
Prior art keywords
traffic sign
data
feature map
target
residual error
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011632849.2A
Other languages
Chinese (zh)
Inventor
李晓欢
马新舒
陈倩
唐欣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangxi Comprehensive Transportation Big Data Research Institute
Guilin University of Electronic Technology
Original Assignee
Guangxi Comprehensive Transportation Big Data Research Institute
Guilin University of Electronic Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangxi Comprehensive Transportation Big Data Research Institute, Guilin University of Electronic Technology filed Critical Guangxi Comprehensive Transportation Big Data Research Institute
Priority to CN202011632849.2A priority Critical patent/CN112712036A/en
Publication of CN112712036A publication Critical patent/CN112712036A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/56Context or environment of the image exterior to a vehicle by using sensors mounted on the vehicle
    • G06V20/58Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads
    • G06V20/582Recognition of moving objects or obstacles, e.g. vehicles or pedestrians; Recognition of traffic objects, e.g. traffic signs, traffic lights or roads of traffic signs
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Probability & Statistics with Applications (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Traffic Control Systems (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure provides a traffic sign identification method and apparatus, electronic equipment and a computer storage medium, and relates to the technical field of image identification. The method comprises the following steps: clustering the traffic signs in a traffic sign data set by adopting a preset clustering algorithm and determining the sizes of the various types of traffic signs; and identifying the image data to be identified by adopting a pre-trained target detection model and determining the target traffic sign corresponding to the size. According to the method and the device, the influence of invalid data on the clustering centers is eliminated through the preset clustering algorithm, and the matching degree between the prior boxes and the traffic signs is greatly improved, which helps reduce the complexity of the training network, shorten the network training time and improve the detection precision of the model; the feature extraction capability of the shallow network is enhanced by adding a residual structure; and by increasing the number of prediction scales and anchor boxes, the extracted shallow feature maps can be used for prediction, improving the accuracy of traffic sign detection.

Description

Traffic sign recognition method and device, electronic equipment and computer storage medium
Technical Field
The present disclosure relates to the field of image recognition technologies, and in particular, to a method and an apparatus for recognizing a traffic sign, an electronic device, and a computer storage medium.
Background
In recent years, unmanned driving has attracted more and more attention worldwide. To ensure that an unmanned vehicle can travel on the road safely, it needs to sense information about its surrounding environment. In the prior art, the actual distance between a target ahead and the vehicle is determined by a laser radar, but the type of the target cannot be determined, so an auxiliary camera is needed to detect all targets on the road surface accurately and in real time. The farther away a target can be identified, the more response time the vehicle has, so that it can brake in time and avoid a collision. Traffic sign detection and identification are an important part of the perception stage of unmanned driving: by acquiring the type and distance of a traffic sign, the automatic driving system can make correct decisions in time.
In real life, the pixels of a traffic sign account for only 0.001%-5% of a field-of-view image; the sign is small, occupies few pixels and has inconspicuous features, which makes it more difficult to detect than a large target. Meanwhile, traffic signs are also affected by weather conditions: severe conditions such as fog, dim light and complex scenes make them difficult to identify, and existing deep learning detection algorithms have difficulty detecting and identifying small-target traffic signs effectively and accurately in real scenes.
Therefore, the accuracy of small-target traffic sign identification in the prior art is not high and needs to be improved.
Disclosure of Invention
The purpose of the present disclosure is to solve at least one of the above technical drawbacks, in particular, the technical drawback of the prior art that the accuracy of the identification of small target traffic signs is not high.
In a first aspect, a traffic sign recognition method is provided, which includes:
acquiring a preset traffic sign data set, wherein the traffic sign data set comprises a training set and a verification set;
clustering the traffic signs in the traffic sign data set by adopting a preset clustering algorithm, and determining the sizes of all types of traffic signs;
and acquiring image data to be recognized, recognizing the image data to be recognized by adopting a pre-trained target detection model based on the size, and determining a target traffic sign corresponding to the size.
As a possible implementation manner of the present disclosure, the clustering the traffic signs in the traffic sign data set by using a preset clustering algorithm to determine the sizes of the various types of traffic signs includes:
screening valid data in the traffic sign data set;
dividing the effective data into a preset number of categories, and calculating the target frame intersection-over-union between the effective data of each category and the clustering center of each category;
and reclassifying the effective data with the target frame intersection ratio larger than a preset threshold until the center point of the effective data is overlapped with the clustering center of the category of the effective data, determining a plurality of target categories, and determining the size of the traffic sign corresponding to each target category.
As a possible embodiment of the present disclosure, the screening valid data in the traffic sign data set includes:
acquiring coordinate data of each data in the traffic sign data set;
for one datum, the coordinate of the lower left corner on the X axis is X_min, the coordinate of the lower left corner on the Y axis is Y_min, the coordinate of the upper right corner on the X axis is X_max, and the coordinate of the upper right corner on the Y axis is Y_max;
calculating D_x = X_max - X_min and D_y = Y_max - Y_min; if D_x = 0 and/or D_y = 0, the data is invalid;
if D_x ≠ 0 and D_y ≠ 0, calculating the ratio Q from D_x and D_y (the formula is given only as an equation image in the original publication);
if 0.2 < Q < 1, the data is valid data.
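A minimal Python sketch of this screening rule is given for illustration; because the formula for Q appears only as an equation image in the original publication, the sketch assumes Q to be the width-to-height ratio D_x / D_y of the annotation box, and the function name and default thresholds are illustrative only.

```python
# Hedged sketch of the validity screen described above. The exact definition of Q
# is given only as an equation image in the original publication; here it is
# assumed that Q is the width-to-height ratio D_x / D_y of the annotated box.
def is_valid_annotation(x_min, y_min, x_max, y_max, q_low=0.2, q_high=1.0):
    d_x = x_max - x_min
    d_y = y_max - y_min
    if d_x == 0 or d_y == 0:       # degenerate box: the data is invalid
        return False
    q = d_x / d_y                  # assumed definition of Q
    return q_low < q < q_high      # keep only annotations with 0.2 < Q < 1


# Example: a 40 x 60 pixel sign annotation passes the screen (Q = 40/60, about 0.67).
print(is_valid_annotation(100, 200, 140, 260))  # True
```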
As a possible embodiment of the present disclosure, the pre-trained target detection model is an improved Yolov3 target detection model, and the target detection model is obtained by training as follows:
inputting the training set into a first residual module to obtain a first feature map, wherein the first residual module comprises one residual calculation;
inputting the first feature map into a second residual module to obtain a second feature map, wherein the second residual module comprises two residual calculations;
inputting the second feature map into a third residual module to obtain a third feature map, wherein the third residual module comprises four residual calculations;
inputting the third feature map into a fourth residual module to obtain a fourth feature map, wherein the fourth residual module comprises eight residual calculations;
inputting the fourth feature map into a fifth residual module to obtain a fifth feature map, wherein the fifth residual module comprises eight residual calculations;
inputting the fifth feature map into a sixth residual module to obtain a sixth feature map, wherein the sixth residual module comprises four residual calculations;
up-sampling the sixth feature map three times, up-sampling the fifth feature map twice, up-sampling the fourth feature map once, and fusing the up-sampled feature maps with the third feature map to obtain a target feature map;
and identifying the traffic signs in the target feature map, and calculating, through a preset loss function, the loss between the traffic signs in the target feature map and the traffic signs in the training set until the loss is within a preset range.
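As an illustration of the up-sampling and fusion step above, a minimal PyTorch-style sketch follows; the channel counts, the nearest-neighbour interpolation mode and the use of channel-wise concatenation for the fusion are assumptions, since the filing does not specify them in this passage.

```python
import torch
import torch.nn.functional as F

def fuse_into_shallow_scale(f3, f4, f5, f6):
    """Fuse the deeper feature maps into the shallow (third) scale.

    f3..f6 stand for the outputs of the third to sixth residual modules, each
    deeper map having half the spatial resolution of the previous one.
    """
    f6_up = F.interpolate(f6, scale_factor=8, mode="nearest")  # up-sampled three times
    f5_up = F.interpolate(f5, scale_factor=4, mode="nearest")  # up-sampled twice
    f4_up = F.interpolate(f4, scale_factor=2, mode="nearest")  # up-sampled once
    return torch.cat([f3, f4_up, f5_up, f6_up], dim=1)         # channel-wise fusion

# Toy tensors standing in for the four feature maps (batch size 1; the channel
# counts are illustrative only):
f3 = torch.randn(1, 128, 64, 64)
f4 = torch.randn(1, 256, 32, 32)
f5 = torch.randn(1, 512, 16, 16)
f6 = torch.randn(1, 1024, 8, 8)
print(fuse_into_shallow_scale(f3, f4, f5, f6).shape)  # torch.Size([1, 1920, 64, 64])
```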
As a possible embodiment of the present disclosure, the preset loss function is:
Loss = L_GIOU + Error_IOU + Error_cls
wherein L_GIOU = 1 - GIOU and
GIOU = IOU - (S_C - U) / S_C,
with S_C being the area of the smallest rectangular box enclosing both S_g and S_p;
IOU = I / U, I = S_g ∩ S_p, U = S_g ∪ S_p, where S_g is the real box area and S_p is the predicted box area;
Error_IOU is the confidence error term and Error_cls is the classification error term, whose detailed formulas are given only as equation images in the original publication;
wherein s^2 is the number of grids in the input image, 1_ij^obj indicates whether the j-th anchor box of the i-th grid captures the target (1 or 0), C_i is the confidence score of the actual box, and Ĉ_i is the confidence score of the predicted box.
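For reference, a minimal Python sketch of the L_GIOU = 1 - GIOU term defined above, for a single pair of axis-aligned boxes given as (x_min, y_min, x_max, y_max); the confidence and classification error terms are not sketched because their formulas appear only as equation images in the original publication.

```python
def giou_loss(box_g, box_p):
    """Compute L_GIOU = 1 - GIOU for one ground-truth box and one predicted box."""
    xg1, yg1, xg2, yg2 = box_g
    xp1, yp1, xp2, yp2 = box_p
    s_g = (xg2 - xg1) * (yg2 - yg1)            # real box area S_g
    s_p = (xp2 - xp1) * (yp2 - yp1)            # predicted box area S_p

    # Intersection I and union U
    iw = max(0.0, min(xg2, xp2) - max(xg1, xp1))
    ih = max(0.0, min(yg2, yp2) - max(yg1, yp1))
    inter = iw * ih
    union = s_g + s_p - inter
    iou = inter / union if union > 0 else 0.0

    # S_C: area of the smallest rectangle enclosing both boxes
    s_c = (max(xg2, xp2) - min(xg1, xp1)) * (max(yg2, yp2) - min(yg1, yp1))
    giou = iou - (s_c - union) / s_c
    return 1.0 - giou

# Two partially overlapping boxes: IOU is about 0.143, GIOU about -0.079, loss about 1.079.
print(giou_loss((0, 0, 4, 4), (2, 2, 6, 6)))
```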
As one possible embodiment of the present disclosure, the traffic sign data set is a TT100K data set, and the method further includes:
converting the format of the data in the TT100K into a target label format;
and dividing the data in the target label format into the training set and the verification set.
In a second aspect, there is provided a traffic sign recognition apparatus, the apparatus comprising:
the data set acquisition module is used for acquiring a preset traffic sign data set, wherein the traffic sign data set comprises a training set and a verification set;
the size acquisition module is used for clustering the traffic signs in the traffic sign data set by adopting a preset clustering algorithm and determining the sizes of all types of traffic signs;
and the identification module is used for acquiring image data to be identified, identifying the image data to be identified by adopting a pre-trained target detection model based on the size, and determining a target traffic sign corresponding to the size.
In a third aspect, an electronic device is provided, which includes:
a processor, a memory, and a bus;
the bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
the processor is used for executing the traffic sign identification method by calling the operation instruction.
In a fourth aspect, a computer storage medium is provided that stores at least one instruction, at least one program, set of codes, or set of instructions that is loaded and executed by the processor to implement the above-mentioned traffic sign recognition method.
According to the method and the device, the influence of invalid annotation data on the clustering centers is eliminated through the preset clustering algorithm, and the matching degree between the prior boxes and the traffic signs is greatly improved, which helps reduce the complexity of the training network, shorten the network training time and improve the detection precision of the model. Through the improved target detection model, the feature extraction capability of the shallow network is enhanced by adding a residual structure; the feature maps are fused through up-sampling so that deep features with strong prediction capability can be used for detection; and by increasing the number of prediction scales and anchor boxes, the extracted shallow feature maps can be used for prediction, improving the accuracy of traffic sign detection.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present disclosure, the drawings used in the description of the embodiments of the present disclosure will be briefly described below.
Fig. 1 is a schematic flow chart of a traffic sign identification method according to an embodiment of the present disclosure;
fig. 2 is a schematic flow chart of a size determination method according to an embodiment of the present disclosure;
fig. 3 is a schematic diagram of effective data labeling according to an embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a target detection model provided in an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of a traffic sign recognition apparatus according to an embodiment of the present disclosure;
fig. 6 is a schematic structural diagram of an electronic device according to an embodiment of the present disclosure.
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. Throughout the drawings, the same or similar reference numbers refer to the same or similar elements. It should be understood that the drawings are schematic and that elements and features are not necessarily drawn to scale.
Detailed Description
Embodiments of the present disclosure will be described in more detail below with reference to the accompanying drawings. While certain embodiments of the present disclosure are shown in the drawings, it is to be understood that the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein, but rather are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the disclosure are for illustration purposes only and are not intended to limit the scope of the disclosure.
It should be understood that the various steps recited in the method embodiments of the present disclosure may be performed in a different order, and/or performed in parallel. Moreover, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this respect.
The term "include" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" is "based, at least in part, on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions for other terms will be given in the following description.
It should be noted that the terms "first", "second", and the like in the present disclosure are only used for distinguishing the devices, modules or units, and are not used for limiting the devices, modules or units to be different devices, modules or units, and also for limiting the sequence or interdependence relationship of the functions executed by the devices, modules or units.
It is noted that references to "a", "an", and "the" modifications in this disclosure are intended to be illustrative rather than limiting, and that those skilled in the art will recognize that "one or more" may be used unless the context clearly dictates otherwise.
The names of messages or information exchanged between devices in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of the messages or information.
To make the objects, technical solutions and advantages of the present disclosure more apparent, embodiments of the present disclosure will be described in detail with reference to the accompanying drawings.
The traffic sign identification method provided by the embodiment of the present disclosure can be applied to an unmanned driving system to identify traffic signs and then perform correct road planning. In real life, a traffic sign occupies only about 0.001%-5% of the pixels of a field-of-view image; it is small, covers few pixels and has inconspicuous features, which makes it harder to detect than a large target. Detection of traffic signs is also affected by weather conditions: severe conditions such as fog, dim light and complex scenes make them difficult to identify, and existing deep learning detection algorithms have difficulty detecting and identifying small-target traffic signs effectively and accurately in real scenes. Therefore, the embodiment of the present disclosure enhances the feature extraction capability of the shallow network by adding a residual structure; fuses the feature maps through up-sampling so that deep features with strong prediction capability can be used for detection; and increases the number of prediction scales and anchor boxes so that the extracted shallow feature maps can be used for prediction, improving the accuracy of traffic sign detection. The coordinate loss is replaced with the GIOU to improve the loss function: the GIOU reflects how the prediction box overlaps the actual box and whether the two boxes are adjacent or far apart, and optimizing the loss function improves the detection accuracy for traffic signs. The improved k-means clustering algorithm can almost completely eliminate the influence of invalid annotation data on the clustering centers and greatly improves the matching degree between the prior boxes and the traffic signs. This not only helps reduce the complexity of the training network and shorten the network training time, but also helps improve the detection precision of the YOLOv3 network model. The detection accuracy of traffic signs is thus improved by improving the network structure, the loss function and the k-means clustering algorithm.
The present disclosure provides a traffic sign recognition method, apparatus, electronic device and computer storage medium, which aim to solve the above technical problems of the prior art.
The following describes the technical solutions of the present disclosure and how to solve the above technical problems in specific embodiments. The following several specific embodiments may be combined with each other, and details of the same or similar concepts or processes may not be repeated in some embodiments. Embodiments of the present disclosure will be described below with reference to the accompanying drawings.
The embodiment of the present disclosure provides a traffic sign identification method, as shown in fig. 1, the method includes:
step S101, acquiring a preset traffic sign data set, wherein the traffic sign data set comprises a training set and a verification set;
step S102, clustering the traffic signs in the traffic sign data set by adopting a preset clustering algorithm, and determining the sizes of all types of traffic signs;
step S103, acquiring image data to be recognized, recognizing the image data to be recognized by adopting a pre-trained target detection model based on the size, and determining a target traffic sign corresponding to the size.
In the embodiment of the present disclosure, the traffic sign data set refers to image data containing traffic signs and may consist of pictures or video. As one embodiment of the present disclosure, the TT100K data set may be adopted, and it is divided into a training set and a verification set, where the training set may contain most of the data and the verification set only needs to contain a small part of the data. After the traffic sign data set is obtained, the data in the data set are clustered with a preset clustering algorithm to determine the types of traffic signs in the data set, where the distinguishing criterion of a type is the size of the traffic sign; the data in the traffic sign data set are classified and the traffic sign size of each type is determined. Based on these sizes, a preset target detection model identifies the image data to be identified and determines the target traffic sign to be detected, whose size corresponds to one of the sizes identified by clustering.
According to the method and the device, the influence of invalid annotation data on the clustering centers is eliminated through the preset clustering algorithm, and the matching degree between the prior boxes and the traffic signs is greatly improved, which helps reduce the complexity of the training network, shorten the network training time and improve the detection precision of the model. Through the improved target detection model, the feature extraction capability of the shallow network is enhanced by adding a residual structure; the feature maps are fused through up-sampling so that deep features with strong prediction capability can be used for detection; and by increasing the number of prediction scales and anchor boxes, the extracted shallow feature maps can be used for prediction, improving the accuracy of traffic sign detection.
In this implementation, as shown in fig. 2, the clustering traffic signs in the traffic sign data set by using a preset clustering algorithm to determine sizes of various types of traffic signs includes:
step S201, screening effective data in the traffic sign data set;
step S202, dividing the effective data into a preset number of categories, and calculating the target frame intersection-over-union between the effective data of each category and the clustering center of each category;
step S203, reclassifying the effective data with the target frame merging ratio larger than a preset threshold until the center point of the effective data is overlapped with the clustering center of the category of the effective data, determining a plurality of target categories, and determining the size of the traffic sign corresponding to each target category.
In the embodiment of the present disclosure, when clustering is performed on the data in the traffic sign data set, the effective data in the traffic sign data set needs to be screened out first, because invalid data may make the clustering centers inaccurate, so that the generated target size ratios do not match the width-to-height ratios of actual objects and resources are wasted. After the invalid annotation data in the data set are eliminated, the valid data in the data set are retained. The valid data are first divided into a preset number of categories, for example K categories, and a random initial clustering center is selected in each category; then the target intersection-over-union (IOU) between the valid data of each category and the clustering center of that category is calculated, wherein
IOU = I / U, I = S_g ∩ S_p, U = S_g ∪ S_p, S_g is the real box area and S_p is the predicted box area.
Taking the clustering center of each category as the center, effective data whose target frame IOU with that clustering center is greater than a preset threshold are classified into the category of that clustering center, so that all the effective data are reclassified. The clustering center of each category is then recalculated after reclassification; if it deviates from the clustering center before reclassification, the effective data are reclassified again, and this continues until the clustering center of each category no longer moves. The size of each category is then determined and taken as the standard size for the target detection algorithm.
According to the embodiment of the invention, the influence of invalid labeling data on the clustering center can be almost completely eliminated through an improved clustering algorithm, and the matching degree between the prior frame and the traffic sign is greatly improved. The method is not only beneficial to reducing the complexity of the training network and shortening the network training time, but also beneficial to improving the detection precision.
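For reference, a conventional IoU-driven k-means sketch over the screened (width, height) annotations follows; the reassignment rule of the improved algorithm described above (moving data whose target-frame IoU with a clustering center exceeds a preset threshold) is not reproduced here, and the default k = 12 simply mirrors the 12 anchor boxes mentioned later in this disclosure.

```python
import numpy as np

def iou_wh(boxes, centers):
    """IoU between (w, h) boxes and cluster centers, assuming co-centred boxes."""
    w = np.minimum(boxes[:, None, 0], centers[None, :, 0])
    h = np.minimum(boxes[:, None, 1], centers[None, :, 1])
    inter = w * h
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + (centers[:, 0] * centers[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=12, iters=100, seed=0):
    """Cluster valid (w, h) annotations into k anchor sizes using IoU distance."""
    rng = np.random.default_rng(seed)
    centers = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, centers), axis=1)   # nearest centre by IoU
        new_centers = np.array([boxes[assign == j].mean(axis=0) if np.any(assign == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):                # centres no longer move
            break
        centers = new_centers
    return centers                                           # anchor (w, h) sizes

# Toy widths/heights standing in for screened traffic-sign boxes:
boxes = np.abs(np.random.default_rng(1).normal(40, 15, size=(500, 2)))
print(kmeans_anchors(boxes, k=4))
```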
The embodiment of the present disclosure provides a possible implementation manner, in which the screening of the valid data in the traffic sign data set includes:
the screening of valid data in the traffic sign dataset comprises:
acquiring coordinate data of each data in the traffic sign data set;
for one datum, the coordinate of the lower left corner on the X axis is X_min, the coordinate of the lower left corner on the Y axis is Y_min, the coordinate of the upper right corner on the X axis is X_max, and the coordinate of the upper right corner on the Y axis is Y_max;
calculating D_x = X_max - X_min and D_y = Y_max - Y_min; if D_x = 0 and/or D_y = 0, the data is invalid;
if D_x ≠ 0 and D_y ≠ 0, calculating the ratio Q from D_x and D_y (the formula is given only as an equation image in the original publication);
if 0.2 < Q < 1, the data is valid data.
In the embodiment of the present disclosure, as shown in fig. 3, for one piece of valid data (in one implementation of the present disclosure, the valid data is image data), the coordinate data in the image data are extracted, the coordinate values of the corners of the traffic sign annotation box in the image data are taken as X_min, X_max, Y_min and Y_max, and D_x = X_max - X_min and D_y = Y_max - Y_min are calculated. If D_x = 0 and/or D_y = 0, the data is invalid; if D_x ≠ 0 and D_y ≠ 0, the ratio Q is calculated, and if 0.2 < Q < 1, the data is valid data.
According to the embodiment of the present disclosure, by checking the traffic sign annotation boxes in the image data, the data are ensured to contain only valid data, which prevents invalid data from affecting the embodiment of the present disclosure.
The embodiment of the present disclosure provides a possible implementation manner, in which the pre-trained target detection model is an improved Yolov3 target detection model, and the target detection model is obtained by training in the following manner:
inputting the training set into a first residual module to obtain a first feature map, wherein the first residual module comprises one residual calculation;
inputting the first feature map into a second residual module to obtain a second feature map, wherein the second residual module comprises two residual calculations;
inputting the second feature map into a third residual module to obtain a third feature map, wherein the third residual module comprises four residual calculations;
inputting the third feature map into a fourth residual module to obtain a fourth feature map, wherein the fourth residual module comprises eight residual calculations;
inputting the fourth feature map into a fifth residual module to obtain a fifth feature map, wherein the fifth residual module comprises eight residual calculations;
inputting the fifth feature map into a sixth residual module to obtain a sixth feature map, wherein the sixth residual module comprises four residual calculations;
up-sampling the sixth feature map three times, up-sampling the fifth feature map twice, up-sampling the fourth feature map once, and fusing the up-sampled feature maps with the third feature map to obtain a target feature map;
and identifying the traffic signs in the target feature map, and calculating, through a preset loss function, the loss between the traffic signs in the target feature map and the traffic signs in the training set until the loss is within a preset range.
In the embodiment of the present disclosure, fig. 4 shows a schematic structural diagram of the target detection model provided by the embodiment of the present disclosure. The original Yolov3 target detection algorithm generates 3 feature maps of different scales, used respectively for detecting large, medium and small targets: the feature map of the shallow network detects small targets, and the feature map of the deep network detects large targets. However, the feature map of the shallow network gives low detection accuracy because its small number of convolution layers leads to insufficient feature extraction. Based on this, the network structure of Yolov3 is deepened by adding residual modules in the shallow network; specifically, a module of 4 residual calculations is added before the modules of 8 residual calculations, so that the feature extraction capability of the shallow network is enhanced before the predicted feature map is generated. Up-sampling is added: the 16x down-sampled features are up-sampled with a stride of 2, and the obtained feature map is fused with the 8x down-sampled feature map, so that deep features can be used for detection. Meanwhile, a prediction scale is added, i.e. the number of prediction scales is increased from 3 to 4 and the number of anchor boxes from 9 to 12, improving the coverage of the anchor boxes over small targets.
In the embodiment of the present disclosure, each residual calculation is realized by a 1x1 convolution and a 3x3 convolution. After passing through the six residual modules, feature maps of four scales are obtained; the four feature maps are fed into two branches, one of which fuses the four feature maps while the other performs identification directly to obtain the target traffic sign.
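A minimal PyTorch sketch of one such residual calculation (a 1x1 convolution followed by a 3x3 convolution with a skip connection) is shown below; the BatchNorm and LeakyReLU layers and the channel numbers are assumptions borrowed from common YOLOv3 practice, not quoted from the filing.

```python
import torch
import torch.nn as nn

class ResidualUnit(nn.Module):
    """One residual calculation: a 1x1 convolution that halves the channels,
    a 3x3 convolution that restores them, and a skip connection."""
    def __init__(self, channels):
        super().__init__()
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels // 2, kernel_size=1, bias=False),
            nn.BatchNorm2d(channels // 2),
            nn.LeakyReLU(0.1),
            nn.Conv2d(channels // 2, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels),
            nn.LeakyReLU(0.1),
        )

    def forward(self, x):
        return x + self.block(x)   # skip connection

# A residual module stacks n such units, e.g. four for the added shallow module:
shallow_module = nn.Sequential(*[ResidualUnit(128) for _ in range(4)])
print(shallow_module(torch.randn(1, 128, 104, 104)).shape)  # torch.Size([1, 128, 104, 104])
```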
The embodiment of the disclosure enhances the feature extraction capability of the shallow network by adding the residual structure; fusing the feature maps through upsampling so as to detect deep features with strong prediction capability; by increasing the prediction scale and the number of anchor frames, the extracted shallow characteristic diagram can be used for prediction, and the accuracy of detecting the traffic sign is improved.
According to the method and the device, the influence of invalid annotation data on the clustering centers is eliminated through the preset clustering algorithm, and the matching degree between the prior boxes and the traffic signs is greatly improved, which helps reduce the complexity of the training network, shorten the network training time and improve the detection precision of the model. Through the improved target detection model, the feature extraction capability of the shallow network is enhanced by adding a residual structure; the feature maps are fused through up-sampling so that deep features with strong prediction capability can be used for detection; and by increasing the number of prediction scales and anchor boxes, the extracted shallow feature maps can be used for prediction, improving the accuracy of traffic sign detection.
The embodiment of the present disclosure provides a traffic sign recognition apparatus, as shown in fig. 5, the traffic sign recognition apparatus 50 may include: a data set acquisition module 510, a size acquisition module 520, and an identification module 530, wherein,
a data set obtaining module 510, configured to obtain a preset traffic sign data set, where the traffic sign data set includes a training set and a verification set;
a size obtaining module 520, configured to cluster the traffic signs in the traffic sign data set by using a preset clustering algorithm, and determine sizes of the various traffic signs;
the identifying module 530 is configured to obtain image data to be identified, identify the image data to be identified by using a pre-trained target detection model based on the size, and determine a target traffic sign corresponding to the size.
Further, the size obtaining module 520 may be configured to, when the preset clustering algorithm is used to cluster the traffic signs in the traffic sign data set and determine the sizes of the various traffic signs:
screening valid data in the traffic sign data set;
dividing the effective data into a preset number of categories, and calculating the target frame intersection-over-union between the effective data of each category and the clustering center of each category;
and reclassifying the effective data with the target frame intersection ratio larger than a preset threshold until the center point of the effective data is overlapped with the clustering center of the category of the effective data, determining a plurality of target categories, and determining the size of the traffic sign corresponding to each target category.
Further, the size obtaining module 520, when screening the valid data in the traffic sign data set, may be configured to:
acquiring coordinate data of each data in the traffic sign data set;
for one datum, the coordinate of the lower left corner on the X axis is X_min, the coordinate of the lower left corner on the Y axis is Y_min, the coordinate of the upper right corner on the X axis is X_max, and the coordinate of the upper right corner on the Y axis is Y_max;
calculating D_x = X_max - X_min and D_y = Y_max - Y_min; if D_x = 0 and/or D_y = 0, the data is invalid;
if D_x ≠ 0 and D_y ≠ 0, calculating the ratio Q from D_x and D_y (the formula is given only as an equation image in the original publication);
if 0.2 < Q < 1, the data is valid data.
Further, the pre-trained target detection model is an improved Yolov3 target detection model, and the target detection model is obtained by training in the following way:
inputting the training set into a first residual module to obtain a first feature map, wherein the first residual module comprises one residual calculation;
inputting the first feature map into a second residual module to obtain a second feature map, wherein the second residual module comprises two residual calculations;
inputting the second feature map into a third residual module to obtain a third feature map, wherein the third residual module comprises four residual calculations;
inputting the third feature map into a fourth residual module to obtain a fourth feature map, wherein the fourth residual module comprises eight residual calculations;
inputting the fourth feature map into a fifth residual module to obtain a fifth feature map, wherein the fifth residual module comprises eight residual calculations;
inputting the fifth feature map into a sixth residual module to obtain a sixth feature map, wherein the sixth residual module comprises four residual calculations;
up-sampling the sixth feature map three times, up-sampling the fifth feature map twice, up-sampling the fourth feature map once, and fusing the up-sampled feature maps with the third feature map to obtain a target feature map;
and identifying the traffic signs in the target feature map, and calculating, through a preset loss function, the loss between the traffic signs in the target feature map and the traffic signs in the training set until the loss is within a preset range.
Further, the preset loss function is:
Loss = L_GIOU + Error_IOU + Error_cls
wherein L_GIOU = 1 - GIOU and
GIOU = IOU - (S_C - U) / S_C,
with S_C being the area of the smallest rectangular box enclosing both S_g and S_p;
IOU = I / U, I = S_g ∩ S_p, U = S_g ∪ S_p, where S_g is the real box area and S_p is the predicted box area;
Error_IOU is the confidence error term and Error_cls is the classification error term, whose detailed formulas are given only as equation images in the original publication;
wherein s^2 is the number of grids in the input image, 1_ij^obj indicates whether the j-th anchor box of the i-th grid captures the target (1 or 0), C_i is the confidence score of the actual box, and Ĉ_i is the confidence score of the predicted box.
Further, the traffic sign data set is a TT100K data set, and the method further comprises:
converting the format of the data in TT100K into a target label format, where the target label format can be xml, txt, json or the like;
And dividing the data in the target label format into the training set and the verification set.
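For illustration, a hedged Python sketch of such a conversion into a txt (YOLO-style) label format follows; the TT100K annotation field names ("imgs", "objects", "bbox", "category") and the assumed 2048x2048 image size are assumptions about the dataset layout rather than statements from the filing, and the paths and class names in the usage comment are placeholders.

```python
import json

def tt100k_to_yolo(annotation_file, class_names, img_size=2048):
    """Convert TT100K-style JSON annotations into YOLO txt label lines.

    The field names and image size are assumed; adjust them to the actual
    annotation file before use.
    """
    with open(annotation_file, "r", encoding="utf-8") as f:
        anno = json.load(f)
    labels = {}
    for img_id, img in anno["imgs"].items():
        lines = []
        for obj in img.get("objects", []):
            if obj["category"] not in class_names:
                continue                      # skip categories outside the chosen set
            b = obj["bbox"]
            xc = (b["xmin"] + b["xmax"]) / 2 / img_size
            yc = (b["ymin"] + b["ymax"]) / 2 / img_size
            w = (b["xmax"] - b["xmin"]) / img_size
            h = (b["ymax"] - b["ymin"]) / img_size
            cls = class_names.index(obj["category"])
            lines.append(f"{cls} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
        labels[img_id] = lines                # one txt label file per image id
    return labels

# Hypothetical usage (paths and class names are placeholders):
# labels = tt100k_to_yolo("data/annotations.json", class_names=["i5", "pl40"])
```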
The traffic sign recognition apparatus according to the embodiment of the present disclosure may perform the traffic sign recognition method according to the above embodiment of the present disclosure, and the implementation principles are similar, and are not described herein again.
According to the method and the device, the influence of invalid annotation data on the clustering centers is eliminated through the preset clustering algorithm, and the matching degree between the prior boxes and the traffic signs is greatly improved, which helps reduce the complexity of the training network, shorten the network training time and improve the detection precision of the model. Through the improved target detection model, the feature extraction capability of the shallow network is enhanced by adding a residual structure; the feature maps are fused through up-sampling so that deep features with strong prediction capability can be used for detection; and by increasing the number of prediction scales and anchor boxes, the extracted shallow feature maps can be used for prediction, improving the accuracy of traffic sign detection.
Referring now to FIG. 6, a block diagram of an electronic device 600 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device in the embodiments of the present disclosure may include, but is not limited to, a mobile terminal such as a mobile phone, a notebook computer, a digital broadcast receiver, a PDA (personal digital assistant), a PAD (tablet computer), a PMP (portable multimedia player), a vehicle terminal (e.g., a car navigation terminal), and the like, and a stationary terminal such as a digital TV, a desktop computer, and the like. The electronic device shown in fig. 6 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
The electronic device includes: a memory and a processor, wherein the processor may be referred to as a processing device 601 described below, and the memory may include at least one of a Read Only Memory (ROM)602, a Random Access Memory (RAM)603, and a storage device 608, which are described below:
as shown in fig. 6, electronic device 600 may include a processing means (e.g., central processing unit, graphics processor, etc.) 601 that may perform various appropriate actions and processes in accordance with a program stored in a Read Only Memory (ROM)602 or a program loaded from a storage means 608 into a Random Access Memory (RAM) 603. In the RAM 603, various programs and data necessary for the operation of the electronic apparatus 600 are also stored. The processing device 601, the ROM 602, and the RAM 603 are connected to each other via a bus 604. An input/output (I/O) interface 605 is also connected to bus 604.
Generally, the following devices may be connected to the I/O interface 605: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 608 including, for example, tape, hard disk, etc.; and a communication device 609. The communication means 609 may allow the electronic device 600 to communicate with other devices wirelessly or by wire to exchange data. While fig. 6 illustrates an electronic device 600 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided.
In particular, according to an embodiment of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated by the flow chart. In such an embodiment, the computer program may be downloaded and installed from a network via the communication means 609, or may be installed from the storage means 608, or may be installed from the ROM 602. The computer program, when executed by the processing device 601, performs the above-described functions defined in the methods of the embodiments of the present disclosure.
It should be noted that the computer readable medium in the present disclosure may be a computer readable signal medium or a computer storage medium or any combination of the two. A computer storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of computer storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In contrast, in the present disclosure, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, either in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
In some embodiments, the clients and servers may communicate using any currently known or future developed network Protocol, such as HTTP (HyperText Transfer Protocol), and may interconnect with any form or medium of digital data communication (e.g., a communications network). Examples of communication networks include a local area network ("LAN"), a wide area network ("WAN"), an internetwork (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
The computer readable medium may be embodied in the electronic device; or may exist separately without being assembled into the electronic device.
The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring a preset traffic sign data set, wherein the traffic sign data set comprises a training set and a verification set; clustering the traffic signs in the traffic sign data set by adopting a preset clustering algorithm, and determining the sizes of all types of traffic signs; and acquiring image data to be recognized, recognizing the image data to be recognized by adopting a pre-trained target detection model based on the size, and determining a target traffic sign corresponding to the size.
Computer program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including but not limited to an object oriented programming language such as Java, Smalltalk, C++, and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The modules or units described in the embodiments of the present disclosure may be implemented by software or hardware.
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), systems on a chip (SOCs), Complex Programmable Logic Devices (CPLDs), and the like.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing description is only exemplary of the preferred embodiments of the disclosure and is illustrative of the principles of the technology employed. It will be appreciated by those skilled in the art that the scope of the disclosure herein is not limited to the particular combination of features described above, but also encompasses other embodiments in which any combination of the features described above or their equivalents does not depart from the spirit of the disclosure. For example, the above features and (but not limited to) the features disclosed in this disclosure having similar functions are replaced with each other to form the technical solution.
Further, while operations are depicted in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features that are described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.

Claims (9)

1. A traffic sign recognition method, comprising:
acquiring a preset traffic sign data set, wherein the traffic sign data set comprises a training set and a verification set;
clustering the traffic signs in the traffic sign data set by adopting a preset clustering algorithm, and determining the sizes of all types of traffic signs;
and acquiring image data to be recognized, recognizing the image data to be recognized by adopting a pre-trained target detection model based on the size, and determining a target traffic sign corresponding to the size.
2. The method of claim 1, wherein the determining the size of each type of traffic sign by clustering the traffic signs in the traffic sign data set using a predetermined clustering algorithm comprises:
screening valid data in the traffic sign data set;
dividing the effective data into a preset number of categories, and calculating the target frame intersection-over-union between the effective data of each category and the clustering center of each category;
and reclassifying the effective data with the target frame intersection ratio larger than a preset threshold until the center point of the effective data is overlapped with the clustering center of the category of the effective data, determining a plurality of target categories, and determining the size of the traffic sign corresponding to each target category.
3. The method of claim 2, wherein the screening the valid data in the traffic sign data set comprises:
acquiring coordinate data of each data in the traffic sign data set;
for one datum, the coordinate of the lower left corner on the X axis is X_min, the coordinate of the lower left corner on the Y axis is Y_min, the coordinate of the upper right corner on the X axis is X_max, and the coordinate of the upper right corner on the Y axis is Y_max;
calculating D_x = X_max - X_min and D_y = Y_max - Y_min; if D_x = 0 and/or D_y = 0, the data is invalid;
if D_x ≠ 0 and D_y ≠ 0, calculating the ratio Q from D_x and D_y (the formula is given only as an equation image in the original publication);
if 0.2 < Q < 1, the data is valid data.
4. The traffic sign recognition method according to claim 1, wherein the pre-trained target detection model is an improved Yolov3 target detection model, and the target detection model is trained by:
inputting the training set into a first residual module to obtain a first feature map, wherein the first residual module comprises one residual calculation;
inputting the first feature map into a second residual module to obtain a second feature map, wherein the second residual module comprises two residual calculations;
inputting the second feature map into a third residual module to obtain a third feature map, wherein the third residual module comprises four residual calculations;
inputting the third feature map into a fourth residual module to obtain a fourth feature map, wherein the fourth residual module comprises eight residual calculations;
inputting the fourth feature map into a fifth residual module to obtain a fifth feature map, wherein the fifth residual module comprises eight residual calculations;
inputting the fifth feature map into a sixth residual module to obtain a sixth feature map, wherein the sixth residual module comprises four residual calculations;
up-sampling the sixth feature map three times, up-sampling the fifth feature map twice, up-sampling the fourth feature map once, and fusing the up-sampled feature maps with the third feature map to obtain a target feature map;
and identifying the traffic signs in the target feature map, and calculating, through a preset loss function, the loss between the traffic signs in the target feature map and the traffic signs in the training set until the loss is within a preset range.
5. The method of claim 4, wherein the predetermined loss function is:
Loss = L_GIOU + Error_IOU + Error_cls,
wherein L_GIOU = 1 - GIOU,
GIOU = IOU - (S_C - U) / S_C,
S_C is the area of the smallest rectangular box containing S_g and S_p,
IOU = I / U, I = S_g ∩ S_p, U = S_g ∪ S_p, S_g is the real box area, and S_p is the predicted box area;
Error_IOU and Error_cls are the confidence loss term and the classification loss term, given by the formulas of Figure FDA0002880488350000031 and Figure FDA0002880488350000032, respectively,
wherein s^2 is the number of grids in the input image, 1_ij^obj indicates whether the j-th anchor box of the i-th grid captures the target (1 or 0), C_i is the confidence score of the actual box, and Ĉ_i is the confidence score of the predicted box.
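The L_GIOU term of claim 5 follows the standard generalized-IoU formulation: the predicted box is penalized both for poor overlap with the ground truth and for the empty area of the smallest enclosing rectangle. A minimal sketch for axis-aligned boxes given as (x_min, y_min, x_max, y_max); the confidence and classification terms are omitted because their exact form is published only as images.

```python
def giou_loss(pred, gt):
    """L_GIOU = 1 - GIOU, with GIOU = IOU - (S_C - U) / S_C for axis-aligned boxes."""
    px1, py1, px2, py2 = pred
    gx1, gy1, gx2, gy2 = gt
    s_p = (px2 - px1) * (py2 - py1)                   # predicted box area S_p
    s_g = (gx2 - gx1) * (gy2 - gy1)                   # real box area S_g
    iw = max(0.0, min(px2, gx2) - max(px1, gx1))      # intersection width
    ih = max(0.0, min(py2, gy2) - max(py1, gy1))      # intersection height
    inter = iw * ih                                   # I
    union = s_p + s_g - inter                         # U
    iou = inter / union
    # S_C: area of the smallest rectangle enclosing both boxes
    s_c = (max(px2, gx2) - min(px1, gx1)) * (max(py2, gy2) - min(py1, gy1))
    giou = iou - (s_c - union) / s_c
    return 1.0 - giou

print(giou_loss((0, 0, 2, 2), (1, 1, 3, 3)))   # partially overlapping boxes
print(giou_loss((0, 0, 2, 2), (0, 0, 2, 2)))   # identical boxes -> loss 0.0
```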
6. The traffic sign recognition method of claim 1, wherein the traffic sign data set is the TT100K data set, the method further comprising:
converting the format of the data in the TT100K data set into a target label format;
and dividing the data in the target label format into the training set and the verification set.
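Claim 6 converts TT100K annotations into a target label format and splits them into training and verification sets. TT100K is commonly distributed with a single JSON file in which each image carries a list of objects with a category and an xmin/ymin/xmax/ymax bounding box; one common target format for YOLO-style training is a per-image text file of normalized "class cx cy w h" rows. A minimal sketch under those assumptions; the paths, class list, image size, and 80/20 split are illustrative and not specified by the claim.

```python
import json, random
from pathlib import Path

def convert_tt100k(anno_file, out_dir, classes, img_size=2048, val_ratio=0.2, seed=0):
    """Convert TT100K JSON annotations to per-image YOLO-style label files and split them."""
    data = json.loads(Path(anno_file).read_text())
    out = Path(out_dir)
    out.mkdir(parents=True, exist_ok=True)
    image_ids = []
    for img_id, img in data["imgs"].items():
        lines = []
        for obj in img.get("objects", []):
            if obj["category"] not in classes:
                continue
            b = obj["bbox"]
            cx = (b["xmin"] + b["xmax"]) / 2 / img_size   # normalized box center x
            cy = (b["ymin"] + b["ymax"]) / 2 / img_size   # normalized box center y
            w = (b["xmax"] - b["xmin"]) / img_size
            h = (b["ymax"] - b["ymin"]) / img_size
            lines.append(f"{classes.index(obj['category'])} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}")
        if lines:
            (out / f"{img_id}.txt").write_text("\n".join(lines))
            image_ids.append(img_id)
    random.Random(seed).shuffle(image_ids)
    n_val = int(len(image_ids) * val_ratio)
    return image_ids[n_val:], image_ids[:n_val]           # training set, verification set

# Hypothetical usage:
# train_ids, val_ids = convert_tt100k("annotations.json", "labels", classes=["pl40", "pn", "i5"])
```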
7. A traffic sign recognition apparatus, comprising:
a data set acquisition module, used for acquiring a preset traffic sign data set, wherein the traffic sign data set comprises a training set and a verification set;
a size acquisition module, used for clustering the traffic signs in the traffic sign data set by adopting a preset clustering algorithm and determining the sizes of all types of traffic signs;
and an identification module, used for acquiring image data to be identified, identifying the image data to be identified by adopting a pre-trained target detection model based on the size, and determining a target traffic sign corresponding to the size.
8. An electronic device, comprising:
a processor, a memory, and a bus;
the bus is used for connecting the processor and the memory;
the memory is used for storing operation instructions;
the processor is used for executing the traffic sign recognition method according to any one of claims 1 to 6 by calling the operation instruction.
9. A computer storage medium having stored thereon at least one instruction, at least one program, a code set, or an instruction set, which are loaded and executed by a processor to implement the traffic sign recognition method according to any one of claims 1 to 6.
CN202011632849.2A 2020-12-31 2020-12-31 Traffic sign recognition method and device, electronic equipment and computer storage medium Pending CN112712036A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011632849.2A CN112712036A (en) 2020-12-31 2020-12-31 Traffic sign recognition method and device, electronic equipment and computer storage medium

Publications (1)

Publication Number Publication Date
CN112712036A true CN112712036A (en) 2021-04-27

Family

ID=75547826

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011632849.2A Pending CN112712036A (en) 2020-12-31 2020-12-31 Traffic sign recognition method and device, electronic equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN112712036A (en)

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190325235A1 (en) * 2018-04-19 2019-10-24 Here Global B.V. Method, apparatus, and system for traffic sign learning
WO2020140371A1 (en) * 2019-01-04 2020-07-09 平安科技(深圳)有限公司 Deep learning-based vehicle damage identification method and related device
CN110018524A (en) * 2019-01-28 2019-07-16 同济大学 A kind of X-ray safety check contraband recognition methods of view-based access control model-attribute
CN110570454A (en) * 2019-07-19 2019-12-13 华瑞新智科技(北京)有限公司 Method and device for detecting foreign matter invasion
GB202000240D0 (en) * 2020-01-08 2020-02-19 Opsydia Ltd Methods and devices for determining a Location Associated with a gemstone
CN111274970A (en) * 2020-01-21 2020-06-12 南京航空航天大学 Traffic sign detection method based on improved YOLO v3 algorithm
CN111950583A (en) * 2020-06-05 2020-11-17 杭州电子科技大学 Multi-scale traffic signal sign identification method based on GMM clustering
CN112016467A (en) * 2020-08-28 2020-12-01 展讯通信(上海)有限公司 Traffic sign recognition model training method, recognition method, system, device and medium
CN112132032A (en) * 2020-09-23 2020-12-25 平安国际智慧城市科技股份有限公司 Traffic sign detection method and device, electronic equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
何凯华: "Traffic Sign Recognition Based on an Object Detection Network", 软件工程 (Software Engineering), no. 10 *
邓天民; 周臻浩; 方芳; 王琳: "Research on an Improved YOLOv3 Traffic Sign Detection Method", 计算机工程与应用 (Computer Engineering and Applications), no. 20 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113591543A (en) * 2021-06-08 2021-11-02 广西综合交通大数据研究院 Traffic sign recognition method and device, electronic equipment and computer storage medium
CN113591543B (en) * 2021-06-08 2024-03-26 广西综合交通大数据研究院 Traffic sign recognition method, device, electronic equipment and computer storage medium
CN113361643A (en) * 2021-07-02 2021-09-07 人民中科(济南)智能技术有限公司 Deep learning-based universal mark identification method, system, equipment and storage medium
CN113780148A (en) * 2021-09-06 2021-12-10 京东鲲鹏(江苏)科技有限公司 Traffic sign image recognition model training method and traffic sign image recognition method

Similar Documents

Publication Publication Date Title
CN108304835B (en) character detection method and device
CN112712036A (en) Traffic sign recognition method and device, electronic equipment and computer storage medium
CN111310770B (en) Target detection method and device
CN110852258A (en) Object detection method, device, equipment and storage medium
CN114993328B (en) Vehicle positioning evaluation method, device, equipment and computer readable medium
CN111738316B (en) Zero sample learning image classification method and device and electronic equipment
CN111209856B (en) Invoice information identification method and device, electronic equipment and storage medium
CN111291715B (en) Vehicle type identification method based on multi-scale convolutional neural network, electronic device and storage medium
CN110287817B (en) Target recognition and target recognition model training method and device and electronic equipment
CN115578570A (en) Image processing method, device, readable medium and electronic equipment
CN113033715B (en) Target detection model training method and target vehicle detection information generation method
CN115546769B (en) Road image recognition method, device, equipment and computer readable medium
CN116012814A (en) Signal lamp identification method, signal lamp identification device, electronic equipment and computer readable storage medium
CN115375657A (en) Method for training polyp detection model, detection method, device, medium, and apparatus
CN115937864A (en) Text overlap detection method, device, medium and electronic equipment
CN116704593A (en) Predictive model training method, apparatus, electronic device, and computer-readable medium
CN113033682B (en) Video classification method, device, readable medium and electronic equipment
CN111612714B (en) Image restoration method and device and electronic equipment
CN114429631A (en) Three-dimensional object detection method, device, equipment and storage medium
CN112528970A (en) Guideboard detection method, device, equipment and computer readable medium
CN113205092A (en) Text detection method, device, equipment and storage medium
CN111383337A (en) Method and device for identifying objects
CN111626283B (en) Character extraction method and device and electronic equipment
CN113963322B (en) Detection model training method and device and electronic equipment
CN113542800B (en) Video picture scaling method, device and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination