CN112699900A - Improved traffic sign identification method of YOLOv4 - Google Patents


Info

Publication number
CN112699900A
CN112699900A
Authority
CN
China
Prior art keywords
feature
data set
traffic sign
myolov4
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110005171.3A
Other languages
Chinese (zh)
Inventor
郭继峰
孙文博
马志强
白淼源
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Northeast Forestry University
Original Assignee
Northeast Forestry University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Northeast Forestry University
Priority to CN202110005171.3A
Publication of CN112699900A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent


Abstract

The invention relates to an improved YOLOv4 traffic sign recognition method. The experimental data set is expanded with data preprocessing methods such as random Gaussian noise, CutMix data enhancement, and mosaic data enhancement, and traffic signs and their confidence scores are then recognized with an improved YOLOv4 model. In the improved YOLOv4 model, features are first extracted by a feature extraction network that introduces depthwise separable convolution; the resulting feature maps of different scales are then fed into a bidirectional feature pyramid network structure for multi-scale feature fusion, which fuses feature information across scales and reinforces information at the same scale; finally, prediction and regression are performed on feature maps of different sizes to obtain the final recognition result. The improved YOLOv4 model uses a Focal loss function to address the imbalance between positive and negative samples during recognition. Experiments show that the improved YOLOv4 model has a small parameter count and computational cost, short inference time, and can quickly and accurately recognize traffic signs in different environments.

Description

Improved traffic sign identification method of YOLOv4
Technical field:
The invention relates to the field of object detection, and in particular to an improved YOLOv4 traffic sign recognition method.
Background art:
Early traffic sign recognition methods typically relied on color differences in the RGB color model and on Histograms of Oriented Gradients (HOG) to detect shape changes. However, using only color or shape as the basis for traffic sign recognition is easily misled by lighting and environmental conditions, and such methods are too slow and insufficiently accurate. With the rapid development of deep learning, deep learning methods have been widely applied in the field of traffic safety, where their performance surpasses that of traditional methods.
Existing deep-learning-based traffic sign recognition methods fall roughly into two categories. The first category uses a two-stage network: a region proposal network selects regions of interest and makes a preliminary prediction, after which a classification and regression network detects the signs in those regions and refines the preliminary prediction. Two-stage networks have large parameter counts and computational costs, and because they depend on a region proposal network to find regions of interest, their overall inference time is too long; although their detection accuracy is high, their recognition speed is too slow to meet the real-time requirements of traffic sign recognition. To address the slow inference of two-stage networks, researchers proposed a second category of methods that recognize traffic signs with a one-stage network. Unlike a two-stage network, which extracts candidate regions, a one-stage network uses a set of preset grid cells and directly classifies and regresses the targets in each cell, markedly reducing inference time and allowing faster recognition. However, whereas a two-stage network only examines regions of interest, a one-stage network must examine a large number of grid cells, so its detection accuracy is slightly lower than that of a two-stage network.
The improved YOLOv4 traffic sign recognition method provided by the invention introduces depthwise separable convolution into the feature extraction network to reduce parameter redundancy in the model, fuses feature information of different scales with a bidirectional feature pyramid structure, and uses Focal loss to address the imbalance between positive and negative samples in the data set. The MYOLOv4 model provided by the invention has a small computational cost and parameter count, short inference time, and can quickly and accurately produce traffic sign recognition results.
Summary of the invention:
The invention aims to overcome the shortcomings of existing methods and provides an improved YOLOv4 traffic sign recognition method to address the excessive parameter counts and computational costs, low recognition accuracy, and low recognition speed of current traffic sign recognition methods.
An improved YOLOv4 traffic sign recognition method, characterized by comprising the following steps:
Step 1: adding random Gaussian noise, CutMix data enhancement, and mosaic data enhancement to the original data set;
Step 2: inputting the data-enhanced data set, as the MYOLOv4 (Miniature YOLOv4) training set, into a feature extraction network that introduces depthwise separable convolution for feature extraction, obtaining feature information at several different scales;
Step 3: inputting the feature information of different scales into a bidirectional feature pyramid network structure to fuse information across scales and reinforce information at the same scale;
Step 4: performing recognition and bounding box regression on the information at different scales, updating the weights by gradient back-propagation with a Focal loss function and a CIoU (Complete Intersection over Union) loss function to reduce the loss in the recognition and regression process, and inputting test set samples into the trained MYOLOv4 model to obtain the traffic sign categories and their confidence scores.
The implementation of step 1 comprises:
Step 1.1: randomly adding Gaussian noise to samples in the traffic sign training set and adding the noisy samples to the training set;
Step 1.2: randomly cutting a rectangular area from an existing sample in the data set and filling the cut area with the corresponding area of another sample to obtain a new data set sample;
Step 1.3: selecting four images from the data set, applying processing such as random flipping, color gamut transformation, and scaling, placing them in the four corners in turn, and stitching them into a new image of the same size as the original; the images obtained in this step are added to the data set used for MYOLOv4 training, improving the generalization performance of the model.
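As a concrete illustration of step 1.1, the Gaussian-noise augmentation can be sketched as below. This is a minimal sketch: the noise mean and standard deviation, and the use of NumPy, are illustrative assumptions, since the method does not fix these values.

```python
import numpy as np

def add_gaussian_noise(image, mean=0.0, sigma=15.0, seed=None):
    """Return a copy of a uint8 image with additive Gaussian noise.

    `mean` and `sigma` are illustrative choices, not values from the method.
    """
    rng = np.random.default_rng(seed)
    noise = rng.normal(mean, sigma, size=image.shape)
    noisy = image.astype(np.float64) + noise
    # Clip back into the valid pixel range before converting to uint8
    return np.clip(noisy, 0, 255).astype(np.uint8)

# Augment a synthetic 64x64 RGB sample; the noisy copy would be added to the training set
sample = np.full((64, 64, 3), 128, dtype=np.uint8)
augmented = add_gaussian_noise(sample, seed=0)
```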
The implementation of step 2 comprises:
Step 2.1: inputting the data set samples into the MYOLOv4 feature extraction network for feature extraction, with downsampling yielding feature maps of different sizes;
Step 2.2: storing the last three feature maps of different scales obtained by feature extraction as the input of the bidirectional feature pyramid network structure.
The implementation of step 3 comprises:
Step 3.1: inputting the last three layers of feature information into the bidirectional feature pyramid network structure for multi-scale feature fusion, and reinforcing same-scale information with skip connections to obtain feature information at three new scales.
The implementation of step 4 comprises:
Step 4.1: performing recognition and bounding box regression on the feature information at three different scales output by the bidirectional feature pyramid network structure;
Step 4.2: the MYOLOv4 model performs gradient back-propagation, optimizing the Focal loss and CIoU loss and adjusting the weights of the MYOLOv4 model until the loss function converges;
Step 4.3: inputting the image to be recognized into the trained MYOLOv4 model, which classifies the image and performs bounding box regression, outputting the traffic sign bounding box, sign name, and recognition confidence to complete recognition of the traffic sign.
Description of the drawings:
Fig. 1 is a flow chart of the improved YOLOv4 traffic sign recognition method.
Fig. 2 is a model framework diagram of the improved YOLOv4 traffic sign recognition method.
Fig. 3 shows data set samples obtained by data enhancement.
Fig. 4 is a diagram of the MCSP residual block.
Fig. 5 shows traffic sign recognition results.
Detailed description of the embodiments:
the technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It is to be understood that the described embodiments are merely exemplary of the invention, and not restrictive of the full scope of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Fig. 1 shows a detailed flow chart of the implementation of the invention and Fig. 2 shows its overall framework. The improved YOLOv4 traffic sign recognition method comprises the following steps:
step 1: and performing data enhancement on the basis of the original data set, wherein the data enhancement respectively comprises the addition of random Gaussian noise, CutMix data enhancement and mosaic data enhancement, and the samples after the data enhancement are put into the original data set to enhance the generalization performance of the model.
Fig. 3 shows data set samples obtained after data enhancement. CutMix data enhancement randomly cuts a rectangular area out of one image in the data set and fills the cut area with the corresponding area of another image to obtain a new data set sample. Mosaic data enhancement applies processing such as random flipping, color gamut transformation, and scaling to four images in the data set, then places the four processed images in the four corners in turn to obtain a new image, of the same size as the original, that mixes the four.
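The CutMix operation just described can be sketched as follows. The rectangle-size distribution and the label-mixing weight `lam` follow the common CutMix convention and are assumptions here; the method itself only specifies cutting a rectangle from one sample and filling it from another.

```python
import numpy as np

def cutmix(image_a, image_b, rng=None):
    """Cut a random rectangle out of image_a and fill it with the same region of image_b."""
    if rng is None:
        rng = np.random.default_rng()
    h, w = image_a.shape[:2]
    # Sample the rectangle size and position (uniform choices are an assumption)
    ch = int(rng.integers(1, h // 2 + 1))
    cw = int(rng.integers(1, w // 2 + 1))
    y = int(rng.integers(0, h - ch + 1))
    x = int(rng.integers(0, w - cw + 1))
    mixed = image_a.copy()
    mixed[y:y + ch, x:x + cw] = image_b[y:y + ch, x:x + cw]
    # Label weight proportional to the uncut area (standard CutMix convention)
    lam = 1.0 - (ch * cw) / (h * w)
    return mixed, lam

a = np.zeros((32, 32, 3), dtype=np.uint8)      # stand-in for one data set sample
b = np.full((32, 32, 3), 255, dtype=np.uint8)  # stand-in for another sample
mixed, lam = cutmix(a, b, rng=np.random.default_rng(0))
```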
Step 2: taking the data-enhanced data set as the MYOLOv4 experimental data set and splitting it into training and test sets at a 9:1 ratio, then inputting the training samples into a feature extraction network that introduces depthwise separable convolution to obtain feature information at several different scales.
fig. 2 shows an overall framework diagram of MYOLOv4, the feature extraction network of MYOLOv4 introduces a depth separable convolution instead of the traditional convolution, consisting of two depth separable convolution blocks and three MCSP residual blocks, for a total of five sub-blocks. Each submodule carries out down-sampling on the Feature map once and obtains a new Feature map, a BiFPN (Bidirectional Feature Pyramid Network) structure is adopted in a multi-scale Feature fusion part, so that the Network fuses more features with the same scale without adding extra calculation parameters, the features with different scales can be fused more fully, the contribution of a node to the Feature Network is reduced for only one input node, and finally the different Feature maps after Feature fusion are input into a detection and regression Network YOLO Head to complete the identification of the traffic sign.
Fig. 4 shows the MCSP residual block. The MCSP residual block consists of depthwise separable convolution, batch normalization, and the Leaky ReLU activation function. The residual connections allow the number of network layers to grow while avoiding vanishing and exploding gradients, strengthening the learning capacity of the convolutional neural network; the Leaky ReLU activation function speeds up the model's convergence during gradient descent; and the stacked MCSP blocks extract image surface feature information more fully.
The last three feature maps produced by the feature extraction network, whose sizes are 1/8, 1/16, and 1/32 of the input image respectively, are selected as the input of the bidirectional feature pyramid network structure.
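The spatial sizes of the three selected feature maps follow directly from the downsampling factors. The 416x416 input below is the common YOLO input resolution and is an assumption here, since no input size is stated:

```python
def head_feature_sizes(input_size):
    """Spatial sizes of the feature maps at 1/8, 1/16 and 1/32 of a square input."""
    return [input_size // stride for stride in (8, 16, 32)]

sizes = head_feature_sizes(416)  # [52, 26, 13]
```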
Step 3: inputting the feature information of different scales obtained by the feature extraction network into the bidirectional feature pyramid network structure for multi-scale feature fusion.
The three feature maps of different scales are input into the bidirectional feature pyramid network structure for feature fusion: feature information from different scales is accumulated with different weights, and same-scale feature information is reinforced with skip connections. The resulting refined feature information improves the model's sensitivity to targets of different sizes and reduces the miss rate for traffic signs.
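The weighted accumulation of feature maps can be sketched with the fast normalized fusion rule from the original BiFPN; whether MYOLOv4 uses exactly this normalization is not stated, so this follows the published BiFPN formulation:

```python
import numpy as np

def fast_normalized_fusion(features, weights, eps=1e-4):
    """Fuse same-resolution feature maps: O = sum(w_i * I_i) / (sum(w_j) + eps),
    with the learnable weights kept non-negative (ReLU), as in BiFPN."""
    w = np.maximum(np.asarray(weights, dtype=np.float64), 0.0)
    fused = sum(wi * f for wi, f in zip(w, features))
    return fused / (w.sum() + eps)

# Two same-scale maps fused with equal weights: the result is close to their mean
f1 = np.ones((13, 13, 256))
f2 = np.full((13, 13, 256), 3.0)
out = fast_normalized_fusion([f1, f2], weights=[1.0, 1.0])
```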
Step 4: performing recognition and bounding box regression on the information at different scales, updating the weights by back-propagation with the Focal loss and CIoU loss functions to reduce the loss in the recognition and regression process, and inputting test set samples into the trained MYOLOv4 model to obtain the traffic sign categories and their confidence scores; Fig. 5 shows the traffic sign recognition results.
Recognition and bounding box regression are performed on the fused feature information; the detection box with the highest confidence is selected as the prediction according to the non-maximum suppression principle; signs of different sizes are detected with the three features of different scales; and the model predictions are compared with the ground-truth labels to compute the total loss. The bounding box regression loss of the MYOLOv4 designed by the invention adopts the CIoU loss function, defined as:
$$L_{CIoU} = 1 - IoU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v$$
wherein: IoU denotes the intersection over union of the prediction box and the real bounding box of the target; b and b^{gt} respectively denote the center point of the prediction box and the center of the real target box; \rho^2(b, b^{gt}) denotes the squared Euclidean distance between the two center points; c denotes the diagonal length of the smallest rectangle enclosing the prediction box and the real target box; \alpha is a weighting function and v measures the consistency of the bounding box aspect ratios:
$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2, \qquad \alpha = \frac{v}{(1 - IoU) + v}$$
The CIoU loss function converges faster even when the intersection over union is 0, and adjusting the prediction box with quantities such as the bounding box center point and aspect ratio yields a better regression effect.
To address the imbalance between positive and negative samples during recognition, MYOLOv4 replaces the binary cross-entropy loss in the classification loss and the confidence loss with the Focal loss function, defined as:
$$FL(p_t) = -\alpha_t (1 - p_t)^{\gamma} \log(p_t)$$
wherein:
$$p_t = \begin{cases} p, & y = 1 \\ 1 - p, & \text{otherwise} \end{cases}$$
\alpha_t represents the weight of positive versus negative samples, and \gamma represents the weight of hard versus easy samples. Focal loss improves on cross-entropy loss: it reduces the weight of easy background samples so that the model concentrates on detecting foreground objects, avoiding the tendency to predict the numerically dominant background class.
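A minimal sketch of the Focal loss above for a single binary prediction. The values alpha = 0.25 and gamma = 2 are the defaults of the original Focal loss paper, not values stated here:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Focal loss for one binary prediction p = P(class = 1), with label y in {0, 1}."""
    p_t = p if y == 1 else 1.0 - p
    alpha_t = alpha if y == 1 else 1.0 - alpha
    # The (1 - p_t)^gamma factor down-weights well-classified (easy) samples
    return -alpha_t * (1.0 - p_t) ** gamma * np.log(p_t)

easy = focal_loss(0.05, y=0)  # confident, correct background: nearly zero loss
hard = focal_loss(0.95, y=0)  # confident, wrong background: keeps a large loss
```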
MYOLOv4 performs back-propagation, optimizing the Focal loss and CIoU loss and adjusting the model weights until the loss function converges. The optimized weight file is selected as the MYOLOv4 prediction weight file. The image to be recognized is input into the MYOLOv4 model, which recognizes it; according to the non-maximum suppression principle, the prediction box with the highest confidence among the several prediction boxes of the same sign is selected as the model prediction, other prediction boxes whose overlap with it exceeds a threshold are suppressed, and the traffic sign bounding box, sign name, and prediction confidence in the image are output, completing the recognition of the traffic sign.
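The non-maximum suppression step can be sketched as greedy NMS over corner-format boxes; the IoU threshold of 0.5 is an illustrative default rather than a value taken from the method:

```python
import numpy as np

def nms(boxes, scores, iou_threshold=0.5):
    """Greedy non-maximum suppression; boxes are (x1, y1, x2, y2) rows.
    Returns indices of the kept boxes, highest score first."""
    boxes = np.asarray(boxes, dtype=float)
    scores = np.asarray(scores, dtype=float)
    order = scores.argsort()[::-1]
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(int(i))
        if order.size == 1:
            break
        rest = order[1:]
        # IoU of the current top-scoring box against every remaining box
        xx1 = np.maximum(boxes[i, 0], boxes[rest, 0])
        yy1 = np.maximum(boxes[i, 1], boxes[rest, 1])
        xx2 = np.minimum(boxes[i, 2], boxes[rest, 2])
        yy2 = np.minimum(boxes[i, 3], boxes[rest, 3])
        inter = np.clip(xx2 - xx1, 0, None) * np.clip(yy2 - yy1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter)
        # Keep only boxes whose overlap with the kept box is below the threshold
        order = rest[iou <= iou_threshold]
    return keep
```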
It should be understood that parts of the specification not set forth in detail are well within the prior art.
While the invention has been described with reference to specific embodiments and procedures, it will be understood by those skilled in the art that the invention is not limited thereto, and that various changes and substitutions may be made without departing from the spirit of the invention. The scope of the invention is only limited by the appended claims.
The embodiments of the invention described herein are exemplary only and should not be taken as limiting the invention, which is described by reference to the accompanying drawings.

Claims (4)

1. An improved YOLOv4 traffic sign recognition method, characterized by comprising the following steps:
Step 1: randomly adding Gaussian noise, CutMix data enhancement, and mosaic data enhancement to the original data set;
Step 2: inputting the data-enhanced data set, as the MYOLOv4 (Miniature YOLOv4) training set, into a feature extraction network that introduces depthwise separable convolution for feature extraction, obtaining feature information at several different scales;
Step 3: inputting the feature information of different scales into a bidirectional feature pyramid network structure to fuse information across scales and reinforce information at the same scale;
Step 4: performing recognition and bounding box regression on the information at different scales, updating the weights by gradient back-propagation with a Focal loss function and a CIoU (Complete Intersection over Union) loss function to reduce the loss in the recognition and regression process, and inputting the image to be recognized into the trained MYOLOv4 model to obtain the traffic sign category and its confidence.
2. The improved YOLOv4 traffic sign recognition method of claim 1, wherein step 1 comprises the steps of:
Step 1.1: randomly adding Gaussian noise to samples in the traffic sign data set and adding the noisy samples to the training set;
Step 1.2: randomly cutting a rectangular area from an existing sample in the data set and filling the cut area with the corresponding area of another sample to obtain a new data set sample;
Step 1.3: selecting four images from the data set, applying processing such as random flipping, color gamut transformation, and scaling, placing them in the four corners in turn, and stitching them into a new image of the same size as the original; the images obtained in this step are added to the data set used for MYOLOv4 training, improving the generalization performance of the model.
3. The improved YOLOv4 traffic sign recognition method of claim 1, wherein step 2 comprises the steps of:
Step 2.1: inputting the training samples into the MYOLOv4 feature extraction network for feature extraction, with downsampling yielding feature maps of different sizes;
Step 2.2: storing the last three feature maps of different scales obtained by feature extraction as the input of the bidirectional feature pyramid network structure.
4. The improved YOLOv4 traffic sign recognition method of claim 1, wherein step 4 comprises the steps of:
Step 4.1: performing recognition and bounding box regression on the feature information at three different scales output by the bidirectional feature pyramid network structure;
Step 4.2: the MYOLOv4 model performs back-propagation, optimizing the Focal loss and CIoU loss and adjusting the weights of the MYOLOv4 model until the loss function converges;
Step 4.3: inputting the image to be recognized into the trained MYOLOv4 model, which performs recognition and bounding box regression and outputs the traffic sign bounding box, sign name, and recognition confidence.
CN202110005171.3A 2021-01-05 2021-01-05 Improved traffic sign identification method of YOLOv4 Pending CN112699900A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110005171.3A CN112699900A (en) 2021-01-05 2021-01-05 Improved traffic sign identification method of YOLOv4

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110005171.3A CN112699900A (en) 2021-01-05 2021-01-05 Improved traffic sign identification method of YOLOv4

Publications (1)

Publication Number Publication Date
CN112699900A 2021-04-23

Family

ID=75514605

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110005171.3A Pending CN112699900A (en) 2021-01-05 2021-01-05 Improved traffic sign identification method of YOLOv4

Country Status (1)

Country Link
CN (1) CN112699900A (en)


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2597614A1 (en) * 2011-11-28 2013-05-29 Clarion Co., Ltd. Automotive camera system and its calibration method and calibration program
CN111191608A (en) * 2019-12-30 2020-05-22 浙江工业大学 Improved traffic sign detection and identification method based on YOLOv3
CN111209907A (en) * 2019-12-20 2020-05-29 广西柳州联耕科技有限公司 Artificial intelligent identification method for product characteristic image in complex light pollution environment
CN111274970A (en) * 2020-01-21 2020-06-12 南京航空航天大学 Traffic sign detection method based on improved YOLO v3 algorithm
CN111914715A (en) * 2020-07-24 2020-11-10 廊坊和易生活网络科技股份有限公司 Intelligent vehicle target real-time detection and positioning method based on bionic vision
CN112132001A (en) * 2020-09-18 2020-12-25 深圳大学 Automatic tracking and quality control method for iPSC and terminal equipment
CN112163602A (en) * 2020-09-14 2021-01-01 湖北工业大学 Target detection method based on deep neural network


Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113297915A (en) * 2021-04-28 2021-08-24 江苏师范大学 Insulator recognition target detection method based on unmanned aerial vehicle inspection
CN113516069A (en) * 2021-07-08 2021-10-19 北京华创智芯科技有限公司 Road mark real-time detection method and device based on size robustness
CN113269161A (en) * 2021-07-16 2021-08-17 四川九通智路科技有限公司 Traffic signboard detection method based on deep learning
CN114463772A (en) * 2022-01-13 2022-05-10 苏州大学 Deep learning-based traffic sign detection and identification method and system
CN114463772B (en) * 2022-01-13 2022-11-25 苏州大学 Deep learning-based traffic sign detection and identification method and system
CN114587416A (en) * 2022-03-10 2022-06-07 山东大学齐鲁医院 Gastrointestinal tract submucosal tumor diagnosis system based on deep learning multi-target detection
CN115019243A (en) * 2022-04-21 2022-09-06 山东大学 Monitoring floater lightweight target detection method and system based on improved YOLOv3
CN114863189A (en) * 2022-07-06 2022-08-05 青岛场外市场清算中心有限公司 Intelligent image identification method based on big data
CN114863189B (en) * 2022-07-06 2022-09-02 青岛场外市场清算中心有限公司 Intelligent image identification method based on big data
CN115035119A (en) * 2022-08-12 2022-09-09 山东省计算中心(国家超级计算济南中心) Glass bottle bottom flaw image detection and removal device, system and method


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20210423)