CN115147380A - Small transparent plastic product defect detection method based on YOLOv5

Info

Publication number
CN115147380A
CN115147380A (application CN202210804667.1A)
Authority
CN
China
Prior art keywords
yolov5
prediction
plastic product
network
transparent plastic
Prior art date
Legal status
Pending
Application number
CN202210804667.1A
Other languages
Chinese (zh)
Inventor
赵文轩
王凌
王凡通
林俊言
高雁凤
陈锡爱
王斌锐
Current Assignee
China Jiliang University
Original Assignee
China Jiliang University
Priority date
Filing date
Publication date
Application filed by China Jiliang University filed Critical China Jiliang University
Priority to CN202210804667.1A
Publication of CN115147380A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T 7/0002: Image analysis; inspection of images, e.g. flaw detection
    • G06N 3/02, 3/08: Neural networks; learning methods
    • G06V 10/762, 10/763: Recognition using clustering, e.g. non-hierarchical techniques based on statistics of modelling distributions
    • G06V 10/806: Fusion of extracted features at the feature-extraction level
    • G06V 10/82: Recognition using neural networks

Abstract

The invention relates to a defect detection method for small transparent plastic products based on a YOLOv5 model, and belongs to the field of deep learning and object detection. First, defect images of plastic products are acquired to construct a data set; second, K-means++ clustering is performed on the annotation boxes of the training set; then a Backbone network augmented with an attention mechanism extracts features from the image at four scales and passes them into a Neck structure; the Neck structure performs feature fusion and feeds the result to a prediction network to obtain the final detection result; finally, the trained network model is validated and tested. On the basis of improving defect feature extraction and recognition accuracy, the method achieves real-time, accurate detection of defects in small transparent plastic products.

Description

Small transparent plastic product defect detection method based on YOLOv5
Technical Field
The invention belongs to the technical field of computer vision, and relates to a method for detecting defects of a small transparent plastic product under a complex background.
Background
With the continuous development of software and hardware technologies, the field of object detection keeps improving. Defect detection is an important direction within it, and as enterprises attach ever more importance to product quality and customers raise their quality requirements, defect detection has gradually become a topic of significant research value in computer vision. The needs of industrial field scenarios call for accurate, real-time defect detection. Many small transparent plastic products, such as syringes and plastic rulers, often develop cracks, missing material (lacks) and bubbles during production and assembly; these are mainly found and rejected by eye, which suffers from low efficiency and high labor intensity.
Current research in object detection falls into two branches: two-stage and single-stage methods. Two-stage methods include R-CNN, Fast R-CNN, Faster R-CNN and the like, and generally work in two steps: first extract candidate target regions (proposal boxes) from the image, then detect and classify each proposal. They achieve high precision but low detection speed, and thus struggle to meet industrial field requirements. Single-stage methods include SSD, the YOLO series and the like, which skip the proposal step and produce results in a single pass. YOLOv5 greatly improves both detection speed and accuracy compared with Faster R-CNN.
In the particular environment of an industrial production line, an object detection algorithm based on YOLOv5 suffers from low accuracy on dense targets and a high miss rate on small targets. Therefore, improving the YOLOv5 network structure to raise detection precision on small and dense targets, while keeping the original detection speed and precision, is of real significance for defect detection in plastic production.
Disclosure of Invention
The invention aims to solve two problems that arise when YOLOv5 is applied to plastic product production images in the industrial field: detection is difficult because the targets are small, and real-time performance is insufficient because the backbone network is complex. To this end, a plastic product defect detection method based on YOLOv5 is provided.
The technical scheme provided by the invention is as follows:
a plastic product defect detection method based on YOLOv5 comprises the following steps:
(1) Acquiring defect images of the plastic product, the images covering three defect types (cracks, lacks and bubbles), annotating the defects, and dividing all images proportionally into a training set, a validation set and a test set; preprocessing the training set, and clustering its prior boxes with the K-means++ algorithm;
(2) Inputting the training set image obtained in the step (1) into a backbone network structure of improved YOLOv5 to obtain 4 feature maps with different scales;
(3) Inputting the 4 image feature layers with different scales acquired in the step (2) into a Neck structure of an improved YOLOv5 network, and outputting tensor data with 4 different scales through a multi-layer feature fusion structure, thereby realizing information fusion with different scales;
(4) Inputting the 4 tensors of different scales obtained in step (3) into the Prediction structure of the improved YOLOv5 network, and optimizing the target anchor boxes through the loss function defined in the 4 detection heads together with the K-means++ clustering, thereby improving detection precision;
(5) Training the network model on the training set, validating it on the validation set, and finally evaluating it on the test set; the trained model is then deployed on a computer image-processing module for real-time detection of plastic product defects.
Further, the step (1) specifically comprises:
and (3) disordering all the acquired images and dividing the images into a training set, a verification set and a test set according to a proportion. And performing data enhancement on the image, wherein the method comprises rotation, scaling, color gamut transformation and image splicing, so that the background complexity of the defect to be detected is enriched. And marking the defect images in the training set by using Labelimg marking software, wherein the marking types are three types of Crack, rock and Bubble.
The original prior boxes do not match this data set, so the K-means++ clustering algorithm, which clusters better, is selected: first, a center point u_1 is randomly chosen from the data set X, and the Euclidean distance d_i from each remaining sample point x_i to its nearest current cluster center is computed. When the next center point is selected, sample points farther from all existing centers are chosen with higher probability, and closer points with lower probability; this is repeated until the required number of centers has been found. Each sample point is then assigned to the cluster of its nearest center; finally, the center positions are updated so that the distance d_i from the points in each cluster to their center is minimized. The loop repeats until the center positions no longer change or the maximum number of iterations is reached. The probability that a sample point is selected as the next center point is defined as follows:
$$P(x_i) = \frac{d_i^{2}}{\sum_{x \in X} d_x^{2}}$$
where d_i is the Euclidean distance from sample point x_i to its nearest current cluster center; selection continues while the number j of current center points is below 12, the required number of cluster centers.
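As a concrete illustration, the K-means++ procedure above can be applied directly to the (width, height) pairs of the annotation boxes to obtain the 12 anchor sizes (3 anchors for each of the 4 detection heads). The sketch below is a plain NumPy reading of the text, not the patent's own code.

```python
# Minimal K-means++ anchor clustering over (w, h) pairs of annotation boxes,
# as described above. D^2 seeding: points far from existing centers are more
# likely to become the next center. A sketch, not the patent's implementation.
import numpy as np

def kmeans_pp(boxes, k=12, iters=100, seed=0):
    """boxes: (N, 2) array of (w, h); returns (k, 2) anchor sizes."""
    rng = np.random.default_rng(seed)
    centers = [boxes[rng.integers(len(boxes))]]          # first center: uniform
    while len(centers) < k:                              # K-means++ seeding
        d2 = np.min([((boxes - c) ** 2).sum(1) for c in centers], axis=0)
        probs = d2 / d2.sum()                            # far points more likely
        centers.append(boxes[rng.choice(len(boxes), p=probs)])
    centers = np.stack(centers)
    for _ in range(iters):                               # standard Lloyd updates
        assign = np.argmin(((boxes[:, None] - centers[None]) ** 2).sum(-1), 1)
        new = np.stack([boxes[assign == j].mean(0) if np.any(assign == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):                    # centers unchanged
            break
        centers = new
    return centers[np.argsort(centers.prod(1))]          # sort anchors by area
```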
Further, the step (2) specifically comprises:
An improved Backbone network of YOLOv5 is constructed: the slicing layer of the original Focus module is changed into a convolution layer, connected in sequence with C3 modules, Conv modules, an SPP module and an ECA attention module; the ECA module is placed at the last layer of the Backbone structure, and multi-scale feature extraction is performed on the image;
changing the slicing layer of the Focus module into a convolution layer means that convolution replaces slicing for feature extraction, which increases network computation speed, as the following sketch illustrates;
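The sketch below contrasts the original Focus slicing with the single strided convolution that replaces it. The channel width (64) and the 6x6/stride-2 kernel follow the common public YOLOv5 v6.0 layout and are assumptions relative to the patent text.

```python
# Sketch contrasting Focus slicing with the plain strided convolution that
# replaces it. Channel counts are assumptions following the usual YOLOv5 stem.
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Original: slice HxW into 4 sub-images, concat on channels, then conv."""
    def __init__(self, c_in=3, c_out=64):
        super().__init__()
        self.conv = nn.Conv2d(4 * c_in, c_out, 3, 1, 1)
    def forward(self, x):
        x = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                       x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(x)

# Replacement: one 6x6 stride-2 convolution gives the same 2x downsampling
# in a single dense op, which is faster on most hardware.
conv_stem = nn.Conv2d(3, 64, kernel_size=6, stride=2, padding=2)

x = torch.randn(1, 3, 640, 640)
assert Focus()(x).shape == conv_stem(x).shape == (1, 64, 320, 320)
```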
the ECA attention module is an improvement on send, using global average pooling, but without reducing the channel dimension; capturing interaction information locally across channels by considering each channel and k neighbor elements of the channel; where k represents the kernel size of the one-dimensional fast convolution, and k is defined as follows:
$$k = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{\mathrm{odd}}, \qquad \gamma = 2,\; b = 1$$
wherein C is the total number of channels and |t|_odd denotes the odd number nearest to t.
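A compact sketch of the ECA block as described: global average pooling, a k-sized 1-D convolution across channels, and a sigmoid gate, with no dimensionality reduction. The defaults gamma = 2 and b = 1 follow the ECA paper and are assumptions here.

```python
# Sketch of the ECA attention module described above. gamma=2, b=1 are the
# ECA paper's defaults and assumptions with respect to the patent text.
import math
import torch
import torch.nn as nn

class ECA(nn.Module):
    def __init__(self, channels, gamma=2, b=1):
        super().__init__()
        t = int(abs(math.log2(channels) / gamma + b / gamma))
        k = t if t % 2 else t + 1                    # nearest odd kernel size
        self.conv = nn.Conv1d(1, 1, k, padding=k // 2, bias=False)
    def forward(self, x):                            # x: (N, C, H, W)
        y = x.mean(dim=(2, 3))                       # global average pooling -> (N, C)
        y = self.conv(y.unsqueeze(1)).squeeze(1)     # local cross-channel interaction
        return x * torch.sigmoid(y)[..., None, None] # reweight each channel
```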
Further, the step (3) specifically comprises:
inputting the 4 image feature layers with different scales acquired in the step (2) into a Neck structure of an improved YOLOv5 network, wherein the Neck structure consists of a convolutional layer, a down-sampling module, a feature fusion module and an up-sampling module, and information fusion with different scales is realized through FPN and PAN;
when the input image is (W, W, 3), taking four feature layers output by four C3 structures in the Backbone structure as the input of the multilayer feature fusion network; the extracted 4 different scale feature layers are respectively
(W/4, W/4, 128), (W/8, W/8, 256), (W/16, W/16, 512) and (W/32, W/32, 1024), denoted TZ1, TZ2, TZ3 and TZ4 in sequence;
The multi-layer feature fusion network comprises a top-down FPN structure and a bottom-up PAN structure. During fusion in the FPN structure, each of TZ2, TZ3 and TZ4 is fused, via up-sampling, with the feature map of corresponding scale output by the CSP structure, producing TZ1' as the input of the PAN fusion structure; during fusion in the PAN structure, TZ1' is down-sampled and successively fused with the FPN outputs, and finally tensors of four different scales are output. A schematic sketch of this four-scale fusion follows.
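The data flow of the four-scale FPN + PAN fusion can be schematized as below. Plain 1x1 convolutions stand in for the C3/CSP fusion blocks, so this shows the wiring rather than the exact module stack of the patent; channel widths follow the four scales listed above.

```python
# Schematic four-scale top-down (FPN) + bottom-up (PAN) fusion. 1x1 convs
# stand in for the C3/CSP blocks at each fusion point; a wiring sketch only.
import torch
import torch.nn as nn

class TinyNeck(nn.Module):
    def __init__(self, chs=(128, 256, 512, 1024)):
        super().__init__()
        self.up = nn.Upsample(scale_factor=2, mode="nearest")
        self.fuse_td = nn.ModuleList(nn.Conv2d(chs[i] + chs[i + 1], chs[i], 1)
                                     for i in range(3))
        self.down = nn.ModuleList(nn.Conv2d(chs[i], chs[i], 3, 2, 1)
                                  for i in range(3))
        self.fuse_bu = nn.ModuleList(nn.Conv2d(chs[i] + chs[i + 1], chs[i + 1], 1)
                                     for i in range(3))

    def forward(self, tz):                    # tz: [TZ1..TZ4], fine -> coarse
        td = [None, None, None, tz[3]]
        for i in (2, 1, 0):                   # top-down: upsample coarse, fuse
            td[i] = self.fuse_td[i](torch.cat([tz[i], self.up(td[i + 1])], 1))
        out = [td[0]]
        for i in range(3):                    # bottom-up: downsample fine, fuse
            out.append(self.fuse_bu[i](torch.cat([self.down[i](out[i]), td[i + 1]], 1)))
        return out                            # four tensors, one per detection head

tz = [torch.randn(1, c, s, s) for c, s in zip((128, 256, 512, 1024), (160, 80, 40, 20))]
assert [o.shape[1] for o in TinyNeck()(tz)] == [128, 256, 512, 1024]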
Further, the step (4) specifically includes:
Inputting the 4 tensors of different scales obtained in step (3) into the Prediction structure of the improved YOLOv5 network, predicting through 4 detection heads, and locating detection targets and classifying their types through the defined loss function;
for head-to-input detection
Figure BDA0003736383050000041
And
Figure BDA0003736383050000042
performing prediction on the feature map, matching the grids where the target center point is located and two grids closest to the center point, generating 3 boundary frames with different sizes by each grid, and determining a final prediction frame and a prediction type through a boundary frame target loss function; the loss function consists of regression loss, confidence loss and classification loss, wherein the definition formula of the regression loss function is changed as follows:
$$L_{\mathrm{CIoU}} = 1 - IoU + \frac{d^{2}}{l^{2}} + \alpha v, \qquad \alpha = \frac{v}{(1 - IoU) + v} \tag{1}$$
where d is the Euclidean distance between the center C^{gt} of the ground-truth box and the center C of the predicted box, l is the diagonal length of the minimum enclosing rectangle of the two boxes, IoU is the ratio of intersection to union of the two boxes' areas, computed as in (2), and v is a parameter measuring the consistency of the aspect ratios of the two boxes, defined as in (3),
$$IoU = \frac{\left| B \cap B^{gt} \right|}{\left| B \cup B^{gt} \right|} \tag{2}$$
$$v = \frac{4}{\pi^{2}} \left( \arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h} \right)^{2} \tag{3}$$
where w^{gt}, h^{gt} are the width and height of the ground-truth box and w, h those of the predicted box. Under this loss definition, more appropriate prediction boxes are generated, improving the prediction performance of the network.
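Putting equations (1) to (3) together, a direct implementation of this regression loss for corner-format boxes might look as follows. It is written from the formulas above and is a sketch, not the patent's code.

```python
# CIoU-style regression loss from eqs. (1)-(3), for boxes as (x1, y1, x2, y2).
import math
import torch

def ciou_loss(pred, gt, eps=1e-7):
    # intersection and union -> IoU, eq. (2)
    x1 = torch.max(pred[..., 0], gt[..., 0]); y1 = torch.max(pred[..., 1], gt[..., 1])
    x2 = torch.min(pred[..., 2], gt[..., 2]); y2 = torch.min(pred[..., 3], gt[..., 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[..., 2] - pred[..., 0]) * (pred[..., 3] - pred[..., 1])
    area_g = (gt[..., 2] - gt[..., 0]) * (gt[..., 3] - gt[..., 1])
    iou = inter / (area_p + area_g - inter + eps)

    # squared center distance d^2 and enclosing-box diagonal l^2
    cxp, cyp = (pred[..., 0] + pred[..., 2]) / 2, (pred[..., 1] + pred[..., 3]) / 2
    cxg, cyg = (gt[..., 0] + gt[..., 2]) / 2, (gt[..., 1] + gt[..., 3]) / 2
    d2 = (cxp - cxg) ** 2 + (cyp - cyg) ** 2
    ex1 = torch.min(pred[..., 0], gt[..., 0]); ey1 = torch.min(pred[..., 1], gt[..., 1])
    ex2 = torch.max(pred[..., 2], gt[..., 2]); ey2 = torch.max(pred[..., 3], gt[..., 3])
    l2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2 + eps

    # aspect-ratio consistency v, eq. (3), and its weight alpha
    wp, hp = pred[..., 2] - pred[..., 0], pred[..., 3] - pred[..., 1]
    wg, hg = gt[..., 2] - gt[..., 0], gt[..., 3] - gt[..., 1]
    v = (4 / math.pi ** 2) * (torch.atan(wg / (hg + eps)) - torch.atan(wp / (hp + eps))) ** 2
    alpha = v / ((1 - iou) + v + eps)
    return 1 - iou + d2 / l2 + alpha * v          # eq. (1)

p = torch.tensor([[0., 0., 2., 2.]]); g = torch.tensor([[1., 1., 3., 3.]])
print(ciou_loss(p, g))   # positive; approaches 0 only for identical boxes
```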
The invention has the beneficial effects that:
an ECA attention mechanism module is added into the backbone network, so that the feature extraction capability of the backbone network is effectively improved, and the Focus structure is replaced by Conv to improve the calculation speed. The feature fusion of the fourth scale is expanded in the feature fusion network, the shallow features of the image are extracted, meanwhile, the defects are detected through the four detection heads, and the accuracy of extracting the defects of the plastic products is improved. And the regression loss function is changed into C _ loss, so that the prediction performance of the network on the defects of the plastic products can be improved, and the omission ratio of dense targets and small targets is reduced.
Drawings
Fig. 1 is a flow chart of a plastic product defect detection method based on YOLOv5 according to the present invention.
Fig. 2 is a network structure diagram of a plastic product defect detection method based on YOLOv5 according to the present invention.
Fig. 3 is a block diagram of an ECA module employed in the present invention.
Fig. 4 is a structural diagram of the improved feature fusion network.
Detailed Description
The present invention is further described in detail below with reference to the attached drawings so that those skilled in the art can implement the invention by referring to the description text.
As shown in FIG. 1, the invention provides a plastic product defect detection method based on YOLOv5. A defect data set is first established, and the prior-box sizes are redesigned according to the defect sizes; the YOLOv5 model is then improved by changing the slicing operation in Focus into a convolution operation, adding an ECA module, passing four features of different scales through a multi-layer feature-fusion structure, and feeding the final four feature maps into the prediction structure for prediction. Finally the network is trained and validated on the training and validation sets, and the saved model is tested on the test set. The method performs well on small-target plastic defect detection, and the detection speed still supports real-time detection.
The method comprises the following steps:
acquiring defect images of plastic products, dividing all the images into a training set, a verification set and a test set, wherein all the images comprise three types of cracks, defects and bubbles, and marking the defects in the training set; clustering the prior frames of the training set by using a K-means + + algorithm;
the pictures are taken according to the collected plastic products, and each picture covers more than two defect types because the defects are different and two defect mixed conditions exist. And expanding the number of samples into 4731 pictures by randomly turning all the pictures vertically and horizontally and randomly changing the brightness, and proportionally dividing the pictures into a training set, a verification set and a test set.
Pictures in the training set are annotated with LabelImg; the annotation categories are crack, lack and bubble. The prior boxes of the training set are clustered with the K-means++ algorithm to obtain 12 cluster centers.
In step (2), within the backbone network structure, the slicing layer in Focus is changed into a convolution layer, the modules are connected in sequence, and an ECA attention mechanism is added, so that four features of different scales are extracted from the image. When the input image is 640 × 640 with 3 channels, the 4 feature maps obtained are (160, 160, 128), (80, 80, 256), (40, 40, 512) and (20, 20, 1024), denoted TZ1, TZ2, TZ3 and TZ4 in sequence.
The ECA attention module captures local cross-channel interaction by considering each channel and its k neighbors, where k is the kernel size of the one-dimensional fast convolution, defined as follows:
$$k = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{\mathrm{odd}}, \qquad \gamma = 2,\; b = 1$$

where C is the total number of channels.
and (3) inputting the 4 image feature layers with different scales acquired in the step (2) into a Neck structure of an improved YOLOv5 network, wherein the Neck structure consists of a convolutional layer, a down-sampling module, a feature fusion module (Concat) and an up-sampling module, information fusion of different scales is realized through FPN and PAN, and finally tensor data of 4 different scales are output.
In step (4), the 4 tensors of different scales obtained in step (3) are input into the Prediction structure of the improved YOLOv5 network; prediction is made through 4 detection heads, and targets are located and classified through the defined loss function. The loss function consists of regression loss, confidence loss and classification loss; the confidence and classification losses are computed with binary cross-entropy, and C_loss is selected for the regression loss, defined as follows:
$$L_{\mathrm{CIoU}} = 1 - IoU + \frac{d^{2}}{l^{2}} + \alpha v, \qquad \alpha = \frac{v}{(1 - IoU) + v}$$
where d is the Euclidean distance between the center C^{gt} of the ground-truth box and the center C of the predicted box, l is the diagonal length of the minimum enclosing rectangle of the two boxes, IoU is the intersection-over-union of the two boxes' areas, and v is a parameter measuring the consistency of their aspect ratios; the latter two are defined as follows,
$$IoU = \frac{\left| B \cap B^{gt} \right|}{\left| B \cup B^{gt} \right|}$$
$$v = \frac{4}{\pi^{2}} \left( \arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h} \right)^{2}$$
where w^{gt}, h^{gt} are the width and height of the ground-truth box and w, h those of the predicted box. Under this loss definition, more appropriate prediction boxes are generated, improving the prediction performance of the network.
In step (5), the Ubuntu 18.04 operating system is used and the model is trained with the PyTorch framework; the specific experimental environment is shown in Table 1.
Table 1 experimental environment configuration
In the experiment, a syringe defect data set of 2098 pictures is used, covering three defect types: cracks, lacks and bubbles; 1889 pictures are used for the training set, 420 for the validation set and 209 for the test set.
During training, the input image size is 640 × 640 × 3, the initial learning rate is set to 0.001, and training runs for 200 epochs with a batch size of 4.
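A schematic training loop matching these settings is sketched below. The model, data and loss are runnable stand-ins rather than the improved YOLOv5 itself, and the SGD momentum of 0.937 is YOLOv5's customary default, an assumption with respect to the text.

```python
# Schematic loop with the reported settings: 640x640x3 input, initial LR
# 0.001, 200 epochs, batch size 4. Model/data/loss are stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(                      # stand-in for the improved YOLOv5
    nn.Conv2d(3, 8, 6, 2, 2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 3))
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.937)
criterion = nn.CrossEntropyLoss()           # stand-in for the YOLO composite loss

for epoch in range(200):                    # 200 training cycles
    for imgs, labels in [(torch.randn(4, 3, 640, 640),   # batch size 4
                          torch.randint(0, 3, (4,)))]:   # 3 defect classes
        optimizer.zero_grad()
        criterion(model(imgs), labels).backward()
        optimizer.step()
```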
Table 2. Comparison of original and improved model performance
The results show that the parameter count of the improved YOLOv5 model increases by 143,384 relative to the unmodified model, the detection speed still meets the real-time requirement, and detection accuracy improves by 2.3 percentage points over the original model. The improved model thus performs better on the syringe defect data set.
While the invention has been described with reference to specific embodiments, it will be understood by those skilled in the art that various changes and modifications may be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (5)

1. A defect detection method for a small transparent plastic product based on YOLOv5, characterized by comprising the following steps:
(1) Acquiring defect images of the small transparent plastic product and dividing all images into a training set, a validation set and a test set, wherein the images cover three defect types (cracks, lacks and bubbles) and the defects in the training set are annotated; clustering the prior boxes of the training set with the K-means++ algorithm;
(2) Inputting the training set image obtained in the step (1) into a backbone network structure of improved YOLOv5 to obtain 4 feature maps with different scales;
(3) Inputting the 4 image feature layers with different scales acquired in the step (2) into a Neck structure of an improved YOLOv5 network, and outputting tensor data with 4 different scales through a multi-layer feature fusion structure, thereby realizing information fusion with different scales;
(4) Inputting the 4 tensor data with different scales obtained in the step (3) into a Prediction structure of an improved YOLOv5 network, predicting through 4 detection heads, and positioning and detecting the type of a detection target through a defined loss function;
(5) Training the network model on the training set, validating it on the validation set, and deploying it on a computer image-processing module for real-time defect detection of the small transparent plastic product.
2. The defect detection method for the YOLOv5-based small transparent plastic product as claimed in claim 1, wherein the K-means++ clustering algorithm in step (1) is modified from the K-means algorithm:
first, a center point u_1 is randomly chosen from the data set X, and the Euclidean distance d_i from each remaining sample point x_i to its nearest current center is computed; when the next center point is selected, sample points farther from all existing centers are chosen with higher probability, and closer points with lower probability; this is repeated until the required number of centers has been found; each sample point is then assigned to the cluster of its nearest center; finally, the center positions are updated so that the distance d_i from the points in each cluster to their center is minimized, and the loop repeats until the center positions no longer change or the maximum number of iterations is reached; the probability that a sample point is selected as the next center point is defined as follows:
$$P(x_i) = \frac{d_i^{2}}{\sum_{x \in X} d_x^{2}}$$
wherein j is the number of current center points, and 12 is the required number of cluster centers.
3. The method for detecting defects of a small transparent plastic product based on YOLOv5 as claimed in claim 1, wherein the backbone network structure of YOLOv5 in step (2) comprises: a Backbone network of YOLOv5 in which the slicing layer of the original Focus module is changed into a convolution layer, connected in sequence with C3 modules, Conv modules, an SPP module and an ECA attention module, the ECA module being arranged at the last layer of the Backbone structure;
changing the slicing layer of the Focus module into a convolution layer means that convolution replaces slicing for feature extraction, which increases network computation speed;
the ECA attention module is an improvement on SENet: it uses global average pooling but does not reduce the channel dimension, and captures local cross-channel interaction by considering each channel together with its k neighboring channels, where k is the kernel size of the one-dimensional fast convolution, defined as follows:
$$k = \left| \frac{\log_2 C}{\gamma} + \frac{b}{\gamma} \right|_{\mathrm{odd}}, \qquad \gamma = 2,\; b = 1$$
wherein C is the total number of channels.
4. The YOLOv5-based small transparent plastic product defect detection method as claimed in claim 1, wherein the 4 different scales in step (3) are specifically: when the input image is (W, W, 3), the four feature layers output by the four C3 structures in the Backbone structure are taken as the input of the multi-layer feature fusion network; the 4 feature layers of different scales are respectively
(W/4, W/4, 128), (W/8, W/8, 256), (W/16, W/16, 512) and (W/32, W/32, 1024),
denoted TZ1, TZ2, TZ3 and TZ4 in sequence; the multi-layer feature fusion network comprises a top-down FPN structure and a bottom-up PAN structure; during fusion in the FPN structure, each of TZ2, TZ3 and TZ4 is fused, via up-sampling, with the feature map of corresponding scale output by the CSP structure, producing TZ1' as the input of the PAN fusion structure; during fusion in the PAN structure, TZ1' is down-sampled and successively fused with the FPN outputs, and finally tensors of four different scales are output.
5. The method for detecting defects of a small transparent plastic product based on YOLOv5 as claimed in claim 1, wherein the prediction by the 4 detection heads in step (4) is specifically: prediction is performed on the input feature maps of sizes (W/4, W/4, 128), (W/8, W/8, 256), (W/16, W/16, 512) and (W/32, W/32, 1024); the grid cell containing the target center point and the two grid cells closest to that center are matched, each grid cell generates 3 bounding boxes of different sizes, and the final prediction box and predicted class are determined through the bounding-box target loss function; the loss function consists of regression loss, confidence loss and classification loss, and the regression loss function is redefined as follows:
$$L_{\mathrm{CIoU}} = 1 - IoU + \frac{d^{2}}{l^{2}} + \alpha v, \qquad \alpha = \frac{v}{(1 - IoU) + v} \tag{1}$$
where d is the Euclidean distance between the center C^{gt} of the ground-truth box and the center C of the predicted box, l is the diagonal length of the minimum enclosing rectangle of the two boxes, IoU is the ratio of intersection to union of the two boxes' areas, computed as in (2), and v is a parameter measuring the consistency of the aspect ratios of the two boxes, defined as in (3),
$$IoU = \frac{\left| B \cap B^{gt} \right|}{\left| B \cup B^{gt} \right|} \tag{2}$$
$$v = \frac{4}{\pi^{2}} \left( \arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h} \right)^{2} \tag{3}$$
where w^{gt}, h^{gt} are the width and height of the ground-truth box and w, h those of the predicted box. Under this loss definition, more appropriate prediction boxes are generated, improving the prediction performance of the network.
Application CN202210804667.1A, filed 2022-07-08; published as CN115147380A (pending): Small transparent plastic product defect detection method based on YOLOv5

Priority Applications (1)

CN202210804667.1A, priority/filing date 2022-07-08: Small transparent plastic product defect detection method based on YOLOv5

Publications (1)

CN115147380A, published 2022-10-04

Family ID: 83411290; Country: CN (China)

Cited By (4)

* Cited by examiner, † Cited by third party

CN116503344A * 2023-04-21 2023-07-28 南京邮电大学 Crack instance segmentation method based on deep learning
CN116959099A * 2023-06-20 2023-10-27 河北华网计算机技术有限公司 Abnormal behavior identification method based on space-time diagram convolutional neural network
CN116523902A * 2023-06-21 2023-08-01 湖南盛鼎科技发展有限责任公司 Electronic powder coating uniformity detection method and device based on improved YOLOV5
CN116523902B * 2023-06-21 2023-09-26 湖南盛鼎科技发展有限责任公司 Electronic powder coating uniformity detection method and device based on improved YOLOV5


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination