CN113850799A - YOLOv5-based trace DNA extraction workstation workpiece detection method - Google Patents
- Publication number
- CN113850799A (application CN202111195733.1A)
- Authority
- CN
- China
- Prior art keywords
- workpiece
- training
- yolov5
- workstation
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- G06T7/0004: Industrial image inspection
- G06N3/045: Neural network architectures; combinations of networks
- G06N3/08: Neural network learning methods
- G06T2207/10004: Still image; photographic image
- G06T2207/10024: Color image
- G06T2207/20081: Training; learning
- G06T2207/20084: Artificial neural networks [ANN]
- G06T2207/30108: Industrial image inspection
- G06T2207/30164: Workpiece; machine component
Abstract
The invention discloses a YOLOv5-based method for detecting workpieces in a trace DNA extraction workstation, belonging to the technical field of target detection. A data set is built by photographing and annotating the workpieces in the workstation, then format-converting and partitioning it; the workpiece images cover different time periods and varying illumination, viewing angles, distances, and occlusion. The training set is input to YOLOv5, with the initial anchor values recalculated, to obtain a weight model; DIoU-NMS is used to regress and output the position information of overlapping targets; the optimal model is selected from validation-set feedback; and the resulting trace DNA extraction workstation workpiece detection model is evaluated on the test set. The invention provides a method for detecting workstation workpieces under different environments that can detect the consumables required in the workstation, addressing the low detection efficiency and poor robustness of the prior art.
Description
Technical Field
The invention belongs to the field of target detection, in particular to detection systems based on image processing and deep learning, and more particularly to a YOLOv5-based workpiece detection method for a trace DNA extraction workstation.
Background
With the continuous progress of artificial intelligence, forensic examiners need to perform batch pipetting, sample adding, mixing, and similar operations on a large number of biological materials in a short time, and automated trace DNA extraction workstations have emerged to meet this need. Because biological material is difficult to collect and samples are easily contaminated, the quantity, type, and position of the workpieces in the workstation must not be mistaken during the experimental process. Before this technology matured, such inspection had to be completed manually, with low speed, low reliability, and a low degree of automation. Therefore, automatically identifying and locating the workpieces on a trace DNA extraction workstation has important research significance.
Machine vision has developed rapidly in recent years, and workpiece identification and positioning are widely applied in industrial automation. Traditional workpiece detection algorithms identify and locate objects of interest in five steps: preprocessing, sliding-window search, feature extraction, target classification, and post-processing. Feature extraction must faithfully reflect the image content, so it has always been the key point and the difficulty of a detection system; representative algorithms include the LBP operator, the Canny operator, and the Hough transform. Technique 1 is a local binary pattern algorithm that extracts local texture features from the picture, with advantages such as rotation invariance and gray-scale invariance. Technique 2 is a multi-stage edge detection algorithm for single-channel gray images that reduces the data scale of the image while preserving its essential attributes. Technique 3 is one of the basic algorithms for recognizing geometric shapes in an image through global features; it is unaffected by figure rotation and only slightly affected by curve discontinuity. These algorithms suit training and detection in one specific environment, and their hand-crafted features are complex, labor-intensive, and prone to interference. Addressing the problem of feature extraction from images, deep learning, an emerging branch of computer vision, extracts features by multiplying convolution templates with the corresponding positions of the input feature map and summing, which yields the convolutional neural network.
Compared with the hand-crafted features of traditional methods, the features extracted by a convolutional neural network are more diverse and expressive, and the classification precision of the fully connected layer is higher when multiple targets appear. Scholars at home and abroad have proposed many convolutional neural network models and studied convolutional and pooling layers, such as AlexNet, VGG, RCNN, YOLO, and SSD, which were quickly applied to image recognition, target tracking, and other fields. Technique 4 deepens the classical convolutional neural network and adds the ReLU activation function and dropout: the activation function speeds up training, while dropout prevents overfitting and reduces the model's parameter count. Technique 5 uses a deep network with small pooling kernels; small convolution kernels avoid the parameter count that large kernels would generate, and the improved accuracy shows that increasing network depth can, to a certain extent, reduce the error rate. Technique 6 is the classic candidate-region detection algorithm: selective search crops candidate boxes, a convolutional neural network extracts their features, the feature information is fed into a linear SVM classifier and a regression model, and highly overlapping candidate boxes are merged through non-maximum suppression; combining candidate regions with convolutional neural networks was a milestone leap for the target detection problem.
Technique 7 directly performs end-to-end regression of target box position and category; it has extremely fast inference, good detection precision, and good generalization from global reasoning, truly achieving real-time detection. Technique 8 adopts the traditional image-pyramid idea, extracting feature maps at different scales for convolution; it belongs to the end-to-end one-stage algorithms and alleviates, to a certain extent, the weak or missed detection of small targets. With the development of target detection, the YOLO series has produced the YOLOv5 model, built on the PyTorch framework; the model occupies little memory, is easy to port, and is aimed chiefly at engineering applications, laying a foundation for workpiece identification and positioning in intelligent manufacturing and production environments.
Through the above analysis, the problems and defects of the prior art are as follows: traditional target detection algorithms have high environmental requirements, low anti-interference capability, and insufficient feature extraction with hand-crafted features.
The difficulty in solving these problems is achieving strong adaptability to the variable environments that arise when the machine operates in an actual workstation, while realizing fast and accurate detection.
The significance of solving these problems is as follows: manual inspection cannot meet the automation requirements of workstation equipment, and traditional target detection algorithms suit recognition in a single fixed environment, whereas the workstation environment changes constantly in real time during actual work. Researching an algorithm with stronger applicability therefore better meets the automation and intelligence requirements of workstation equipment.
Disclosure of Invention
Aiming at the problems in the prior art, the invention provides an online workpiece detection system with automatic feature extraction, automatic feature fusion, and automatic identification, which removes the reliance on manual inspection and overcomes the inaccurate, environment-sensitive feature extraction of traditional algorithms.
The invention is realized as follows: the YOLOv5-based trace DNA extraction workstation workpiece detection method comprises the following steps:
Step 1: acquire workpiece picture data from the workstation live scene;
Step 2: annotate the acquired data, then format and divide the annotated data;
Step 3: based on the YOLOv5 network algorithm, perform iterative training from a pre-trained weight model, continuously adjusting the model's weight parameters using the training-set and validation-set data;
Step 4: after the YOLOv5 network algorithm is trained, save the weight file, compare the models' validation-set evaluation indexes, and select the optimal model accordingly to identify and detect the workpieces.
Preferably, in step 1, a self-constructed data set is acquired by photographing workpiece samples with a Daheng Imaging Galaxy-series MER-500-14U3C color camera. To ensure sample diversity, pictures under different illumination, angles, distances, and occlusion are collected during the data collection stage, for a total of 1200 workpiece images.
Preferably, in step 2, the data set adopts the PASCAL VOC format; a labelImg tool labels the target objects in the pictures, and each label file contains the rectangle coordinate parameters of the real regions. Labeled files use the xml suffix with file names matching the picture names, and the data set is divided into a training set, a validation set, and a test set at a 6:2:2 ratio.
Preferably, in step 3, weight models pre-trained on the COCO and VOC data sets are used during training-set training; training data and weight parameters are saved at each run, and wandb tracks the training process to visualize the results. Training defaults to the SGD optimization algorithm with the following hyper-parameters: batch size 8, 300 training epochs, momentum factor 0.937, weight decay coefficient 0.0005, and initial learning rate 0.01; the learning rate is dynamically adjusted with a cosine annealing strategy, and the loss function is GIoU Loss.
The loss function is calculated as L_GIoU = 1 - GIoU, where GIoU = A/B - (C - B)/C, C is the area of the minimum enclosing rectangle of the prediction box and the ground-truth box, A is their intersection area, and B is their union area.
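As a hedged illustration of the GIoU loss defined in the text, a minimal sketch for two axis-aligned boxes might look like the following (this is not the patent's implementation; the box format (x1, y1, x2, y2) is an assumption):

```python
def giou_loss(box_p, box_g):
    """L_GIoU = 1 - GIoU for two boxes given as (x1, y1, x2, y2)."""
    # A: intersection area
    ix1, iy1 = max(box_p[0], box_g[0]), max(box_p[1], box_g[1])
    ix2, iy2 = min(box_p[2], box_g[2]), min(box_p[3], box_g[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    # B: union area
    area_p = (box_p[2] - box_p[0]) * (box_p[3] - box_p[1])
    area_g = (box_g[2] - box_g[0]) * (box_g[3] - box_g[1])
    union = area_p + area_g - inter
    # C: area of the minimum enclosing rectangle
    cx1, cy1 = min(box_p[0], box_g[0]), min(box_p[1], box_g[1])
    cx2, cy2 = max(box_p[2], box_g[2]), max(box_p[3], box_g[3])
    enclose = (cx2 - cx1) * (cy2 - cy1)
    giou = inter / union - (enclose - union) / enclose
    return 1.0 - giou

# Identical boxes give GIoU = 1, so the loss is 0; disjoint boxes give a loss above 1.
```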
Preferably, in step 4, to measure model performance, the algorithm is evaluated with the evaluation criteria commonly used in target detection, namely three common indexes: mean average precision mAP, Precision, and Recall. Based on the validation set's feedback on model performance, the trained optimal YOLOv5 model is selected and fed the test set for workpiece identification testing; best.pt, obtained by training from the YOLOv5x.pt weights, is selected as the model weight.
The mean average precision formula is mAP = (1/Q) Σ_{q=1}^{Q} AP(q), where Q is the total number of classes and AP is the area under the Precision-Recall curve.
The Precision formula is Precision = TP / (TP + FP), where TP is the number of positive samples predicted to be positive and FP is the number of negative samples predicted to be positive.
The Recall formula is Recall = TP / (TP + FN), where FN is the number of positive samples predicted to be negative.
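Under the definitions just given, the three evaluation indexes can be sketched as follows (a minimal illustration from TP/FP/FN counts; the per-class AP values are assumed to be computed elsewhere from Precision-Recall curves):

```python
def precision(tp, fp):
    """Fraction of predicted positives that are truly positive."""
    return tp / (tp + fp)

def recall(tp, fn):
    """Fraction of true positives that were actually detected."""
    return tp / (tp + fn)

def mean_average_precision(ap_per_class):
    """mAP = (1/Q) * sum of the per-class AP values."""
    return sum(ap_per_class) / len(ap_per_class)
```

For example, 8 correct detections with 2 false alarms and 2 misses give Precision = Recall = 0.8.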
The YOLOv5 network is a modified YOLOv5 network, obtained as follows:
Step 1: replace the traditional non-maximum suppression (NMS) in YOLOv5 with DIoU-NMS. When overlapping targets are identified, conventional NMS may discard them, whereas DIoU-NMS can regress the position information of the overlapping targets' bounding-box center points.
Step 2: when the YOLOv5 network is trained, the initial anchor boxes must be calculated; a K-means clustering algorithm is added to recalculate the initial anchor boxes for the research target, replacing the result computed by the network's built-in automatic anchor function.
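As a hedged sketch of the anchor-recalculation step, plain K-means over the labeled box sizes (width, height) could look like the following. This is illustrative only: YOLOv5's own autoanchor routine additionally uses an IoU-based distance and genetic refinement, which are omitted here.

```python
import random

def kmeans_anchors(wh_pairs, k, iters=100, seed=0):
    """Cluster (width, height) pairs into k anchor sizes with plain K-means."""
    random.seed(seed)
    centers = random.sample(wh_pairs, k)  # initialize from the data
    for _ in range(iters):
        # assign each box to its nearest center (Euclidean distance)
        clusters = [[] for _ in range(k)]
        for w, h in wh_pairs:
            i = min(range(k), key=lambda c: (w - centers[c][0]) ** 2
                                            + (h - centers[c][1]) ** 2)
            clusters[i].append((w, h))
        # move each center to the mean of its cluster (keep it if empty)
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return sorted(centers)
```

On two clearly separated size groups the routine converges to their means, which then serve as the initial anchors.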
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required to be used in the embodiments of the present application will be briefly described below.
FIG. 1 is a diagram of the steps of the workpiece inspection in the automatic extraction workstation for trace DNA according to the embodiment of the present invention
FIG. 2 is a flow chart of the workpiece inspection of the automatic extraction workstation for trace DNA according to the embodiment of the present invention
FIG. 3 is an example of the trace DNA automatic extraction workstation equipment provided by the embodiment of the invention
FIG. 4 is a YOLOv5 network structure model provided by the embodiment of the present invention
FIG. 5 is an image of a workpiece marked by an algorithm under different illumination according to an embodiment of the present invention
FIG. 6 is an image of a workpiece marked by an algorithm identified at different viewing angles according to an embodiment of the present invention
FIG. 7 is an image of a workpiece marked by an algorithm under different occlusions according to an embodiment of the present invention
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Aiming at the problems in the prior art, the invention provides a YOLOv5-based trace DNA extraction workstation workpiece detection method, described in further detail below with reference to the accompanying drawings and the specific implementation process.
Fig. 1 shows the steps of workpiece detection in the automatic extraction workstation based on the deep learning algorithm; the steps are explained in detail as follows:
s101: acquiring workpiece picture data based on a workstation live-action;
consider the interference of a steady light source with a workstation and the external natural light, where a certain micro DNA automated extraction workstation is shown in fig. 3. A workpiece picture acquisition device is designed, and a workpiece sample photo is acquired by adopting a Da Heng Galaxy series MER-500-14U3C color camera to carry out self-construction data set. In order to ensure sample diversity, pictures of conditions such as illumination, angles, distances and shielding at different time periods are collected in a data collection stage, and 1200 workpiece images are obtained in total.
S102: marking the data collected by the live-action scene, and carrying out format processing and division on the marked data;
the data set adopts a PASAL VOC labeling format, so that related labeling work and enhancement operation are facilitated, a labelImg labeling tool is used for manually labeling a target object in a workpiece picture, and a labeling file comprises matrix coordinate parameters of a real target. The marked data is stored in a label file in an xml format, the xml label file is converted into a txt file required by YOLOv5, and the marked original data set is divided by a leave-out method through programming and is respectively a 60% training set, a 20% verification set and a 20% testing set.
S103: based on a YOLOv5 network algorithm, a pre-trained weight model is used for iterative training, and the weight parameters of the model are continuously adjusted by using training set and verification set data.
The self-made data set is trained with the YOLOv5 model; the training set is trained from weight models pre-trained on the COCO and VOC data sets. The weight file best.pt obtained by training is fed the validation-set data, and whether the training is overfitting is judged from how the loss value changes with the epoch, as shown in the workpiece detection flow chart of fig. 2. Training is iterated continuously while the model parameters and hyper-parameters are adjusted. The training process is tracked and the training results visualized through wandb, an online model-training visualization tool that automatically records the hyper-parameters and output indexes during training. Training defaults to the SGD optimization algorithm with the following hyper-parameters: batch size 8, 300 training epochs, momentum factor 0.937, weight decay coefficient 0.0005, and initial learning rate 0.01; the learning rate is dynamically adjusted with a cosine annealing strategy, and the loss function is GIoU Loss. The loss function is calculated as L_GIoU = 1 - GIoU, where GIoU = A/B - (C - B)/C, C is the area of the minimum enclosing rectangle of the prediction box and the ground-truth box, A is their intersection area, and B is their union area.
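The cosine annealing strategy mentioned in S103 can be sketched as a one-cycle schedule (an assumption-laden illustration: the rate is taken to decay from the initial value lr0 to a final fraction lrf of it over the full 300-epoch budget; YOLOv5's exact schedule may differ in detail):

```python
import math

def cosine_lr(epoch, epochs=300, lr0=0.01, lrf=0.01):
    """Cosine annealing from lr0 down to lr0 * lrf over `epochs` epochs."""
    return lr0 * (lrf + (1 - lrf) * (1 + math.cos(math.pi * epoch / epochs)) / 2)
```

At epoch 0 the rate equals lr0 (0.01 here); at the final epoch it has annealed to lr0 * lrf, with the steepest decay around mid-training.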
S104: and after the Yolov5 network algorithm is trained, saving the weight file, judging through a verification set evaluation index of a comparison model, and selecting an optimal model according to a judgment result to identify and detect the workpiece.
To measure model performance, the algorithm is evaluated with the evaluation criteria commonly used in target detection, namely three common indexes: mean average precision mAP, Precision, and Recall. The optimal trained model is selected according to the validation-set feedback, and this optimal model performs the identification test on the workstation workpiece test set; best.pt, obtained by training from the YOLOv5x.pt weights, is selected as the model weight. As shown in figs. 5, 6, and 7, unknown samples input to the model are detected to obtain the label information of their bounding boxes.
The mAP formula is mAP = (1/Q) Σ_{q=1}^{Q} AP(q), where Q is the total number of classes and AP is the area under the Precision-Recall curve.
The Precision formula is Precision = TP / (TP + FP), where TP is the number of positive samples predicted to be positive and FP is the number of negative samples predicted to be positive.
The deep learning algorithm YOLOv5 in S103 and S104 is a target detection model, as shown in fig. 4. During training, data enhancement is performed first: pictures are spliced by random scaling, cropping, and arrangement, adaptive scaling is applied to the pictures, and finally a K-means clustering algorithm is added before network operation to recalculate the initial anchor boxes for the research target, replacing the result computed by the network's built-in automatic anchor function. Entering the Backbone stage, a Focus downsampling structure is adopted, comprising 4 slicing operations and 1 convolution with 32 convolution kernels. It halves the spatial size of the original image and quadruples the channels; although the size is reduced, the information of the image stays complete. Meanwhile, a CSPNet partial cross-layer fusion structure is adopted: the network's optimization process yields a richer feature map while the gradient computation is concentrated in the feature map, reducing the calculation amount to a certain extent. A Neck layer is inserted before the network outputs a prediction result to ensure better feature fusion, and the CSP2 module strengthens that fusion. An FPN network transmits high-level feature information from top to bottom by upsampling, together with the output features of CSP modules at different levels, and the fused PAN network aggregates the shallow features through a bottom-up feature pyramid.
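The Focus slicing operation just described (spatial size halved, channels quadrupled) can be sketched without any framework as follows. Pure-Python nested lists stand in for an H x W x C image with each pixel a list of channel values; the trailing 32-kernel convolution is omitted, and the parity order of the four slices is an assumption:

```python
def focus_slice(img):
    """Slice an H x W x C image into H/2 x W/2 x 4C by gathering the
    four parity-offset pixels of each 2x2 block into one channel stack."""
    h, w = len(img), len(img[0])
    out = []
    for y in range(0, h, 2):
        row = []
        for x in range(0, w, 2):
            # four spatial parities concatenated -> 4x the channels
            px = (img[y][x] + img[y + 1][x]
                  + img[y][x + 1] + img[y + 1][x + 1])
            row.append(px)
        out.append(row)
    return out
```

A 2x2 single-channel image thus becomes a single 4-channel pixel: every input value survives, which is why the slicing loses no information despite the smaller spatial size.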
An SPP module performs channel splicing on the feature maps of different scales obtained after max-pooling, clearly separating context features and more effectively enlarging the receptive range of the main features; parameter fusion across the different detection layers makes the output prediction more accurate. At the YOLOv5 output, the traditional non-maximum suppression (NMS) is modified: DIoU-NMS repairs the case where conventional NMS ignores overlapping targets, since the DIoU algorithm can regress the position information of the overlapping targets' bounding-box center points.
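A minimal DIoU-NMS sketch follows (an illustration, not the patent's code; boxes are assumed to be (x1, y1, x2, y2, score)). The DIoU term subtracts a normalized center-distance penalty from IoU before comparing against the threshold, so heavily overlapped but clearly distinct objects are less likely to be suppressed:

```python
def diou(b1, b2):
    """IoU minus the squared center distance over the squared enclosing diagonal."""
    ix1, iy1 = max(b1[0], b2[0]), max(b1[1], b2[1])
    ix2, iy2 = min(b1[2], b2[2]), min(b1[3], b2[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    a1 = (b1[2] - b1[0]) * (b1[3] - b1[1])
    a2 = (b2[2] - b2[0]) * (b2[3] - b2[1])
    iou = inter / (a1 + a2 - inter)
    c1x, c1y = (b1[0] + b1[2]) / 2, (b1[1] + b1[3]) / 2
    c2x, c2y = (b2[0] + b2[2]) / 2, (b2[1] + b2[3]) / 2
    ex1, ey1 = min(b1[0], b2[0]), min(b1[1], b2[1])
    ex2, ey2 = max(b1[2], b2[2]), max(b1[3], b2[3])
    diag = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - ((c1x - c2x) ** 2 + (c1y - c2y) ** 2) / diag

def diou_nms(boxes, thresh=0.5):
    """Keep the highest-scoring boxes, suppressing any whose DIoU with a
    kept box exceeds the threshold."""
    boxes = sorted(boxes, key=lambda b: b[4], reverse=True)
    kept = []
    for b in boxes:
        if all(diou(b[:4], k[:4]) <= thresh for k in kept):
            kept.append(b)
    return kept
```

A duplicate detection of the same workpiece is suppressed, while a distant workpiece is kept even under a strict threshold, because its center-distance penalty drives the DIoU well below zero.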
In the description of the present invention, "a plurality" means two or more unless otherwise specified; the terms "upper", "lower", "left", "right", "inner", "outer", "front", "rear", "head", "tail", and the like, indicate orientations or positional relationships based on the orientations or positional relationships shown in the drawings, are only for convenience in describing and simplifying the description, and do not indicate or imply that the device or element referred to must have a particular orientation, be constructed in a particular orientation, and be operated, and thus, should not be construed as limiting the invention. Furthermore, the terms "first," "second," "third," and the like are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
The above description is only a preferred embodiment of the present invention and is not intended to limit its scope; all modifications, equivalents, and improvements within the spirit and scope of the invention as defined by the appended claims are covered.
Claims (8)
1. A YOLOv5-based trace DNA extraction workstation workpiece detection method, characterized by comprising the following steps:
Step 1: acquire workpiece picture data from the workstation live scene;
Step 2: annotate the acquired data, then format and divide the annotated data;
Step 3: based on the YOLOv5 network algorithm, perform iterative training from a pre-trained weight model, continuously adjusting the model's weight parameters using the training-set and validation-set data;
Step 4: based on the YOLOv5 network algorithm, save the weight file after training, compare the models' validation-set evaluation indexes, and select the optimal model accordingly to identify and detect the workpieces.
2. The YOLOv5-based trace DNA extraction workstation workpiece detection method as claimed in claim 1, wherein in step 1, workpiece sample photos are taken with a Daheng Imaging Galaxy-series MER-500-14U3C color camera to self-construct the data set; to ensure sample diversity, pictures under different illumination, angles, distances, and occlusion are collected during the data collection stage, for a total of 1200 workpiece images.
3. The YOLOv5-based trace DNA extraction workstation workpiece detection method as claimed in claim 1, wherein in step 2, the data set is in PASCAL VOC format; the target objects in the pictures are labeled with a labelImg tool, and each label file contains the rectangle coordinate parameters of the real regions; the labeled files use the xml suffix with file names matching the picture names, and the data set is divided into a training set, a validation set, and a test set at a 6:2:2 ratio.
4. The YOLOv5-based trace DNA extraction workstation workpiece detection method of claim 1, wherein in step 3, workstation workpiece detection is performed with YOLOv5, and weight models pre-trained on the COCO and VOC data sets are used during training-set training; training data and weight parameters are saved at each run, and wandb tracks the training process to visualize the results; the hyper-parameters are set as follows: batch size 8, 300 training epochs, momentum factor 0.937, weight decay coefficient 0.0005, initial learning rate 0.01, the learning rate dynamically adjusted with a cosine annealing strategy, and GIoU Loss as the loss function.
5. The method for detecting the workpiece of the YOLOv5-based trace DNA extraction workstation as claimed in claim 1, wherein in step four, workpiece detection for the workstation is performed with YOLOv5, and to measure model performance the algorithm is evaluated with three evaluation indexes commonly used in target detection: mAP, Precision and Recall. Guided by the model performance fed back by the verification set, the trained optimal YOLOv5 model is selected and applied to the test set for workpiece identification and testing, wherein best.pt, trained from the YOLOv5x.pt weights of YOLOv5, is selected as the model weight.
6. The method for detecting the workpiece of the YOLOv5-based trace DNA extraction workstation as claimed in claim 1 or 3, wherein YOLOv5 performs data enhancement during training: pictures are spliced by random scaling, cropping and arrangement, adaptive scaling is applied to the pictures after each training round or every few rounds, and a K-means clustering algorithm is added before network operation to recalculate the initial anchor boxes for the targets of this research, replacing the result computed by the network's built-in automatic anchor-box function. On entering the Backbone stage, a Focus downsampling structure is adopted, comprising 4 slicing operations and 1 convolution operation with 32 convolution kernels; the size of the original image is halved and the number of channels is quadrupled, so that although the size is reduced, the information integrity of the image is preserved. Meanwhile, a CSPNet local cross-layer fusion structure is adopted: richer feature maps are obtained through the optimization process of the network, while the gradient transformation process is concentrated in the feature maps, reducing the amount of computation to a certain extent. The network inserts a Neck layer before outputting the prediction result to ensure better feature fusion: the CSP2 module is adopted to strengthen feature fusion, the FPN network transmits high-level feature information from top to bottom and outputs the features of CSP modules at different layers by upsampling, and the fused PAN network aggregates shallow features through a bottom-up feature pyramid.
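The Focus slicing step above (size halved, channels quadrupled, no information lost) can be sketched as follows; this shows only the four interleaved slices and channel concatenation, omitting the subsequent 32-kernel convolution.

```python
import numpy as np

def focus_slice(img):
    """Focus-style downsampling: four interleaved spatial slices concatenated
    on the channel axis. img: (H, W, C) array with even H and W."""
    return np.concatenate([
        img[0::2, 0::2], img[1::2, 0::2],
        img[0::2, 1::2], img[1::2, 1::2],
    ], axis=-1)

x = np.arange(640 * 640 * 3).reshape(640, 640, 3)
focus_slice(x).shape  # (320, 320, 12): spatial size halved, channels x4
```

Every input pixel appears exactly once in the output, which is why the slicing reduces resolution without discarding image information.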
The SPP module performs channel splicing on feature maps of different scales obtained after the max-pooling operation, so that context features are clearly separated, the receptive field of the trunk features is increased more effectively, and parameter fusion across different detection layers is performed on different trunk layers, making the output prediction more accurate. The traditional non-maximum suppression (NMS) at the YOLOv5 output is modified: DIoU-NMS is used to remedy the traditional NMS's tendency to discard overlapping targets when identifying them, as the DIoU algorithm can regress the position of the center point of an overlapping target's bounding box.
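The DIoU measure that DIoU-NMS suppresses on can be sketched as follows: IoU minus the squared distance between box centers, normalized by the squared diagonal of the smallest enclosing box. This is a generic sketch of the metric, not the patent's implementation.

```python
def diou(box_a, box_b):
    """DIoU between two (x1, y1, x2, y2) boxes: IoU minus the squared centre
    distance normalised by the squared diagonal of the enclosing box."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    iou = inter / (area_a + area_b - inter)
    # squared distance between the two box centres
    cax, cay = (box_a[0] + box_a[2]) / 2, (box_a[1] + box_a[3]) / 2
    cbx, cby = (box_b[0] + box_b[2]) / 2, (box_b[1] + box_b[3]) / 2
    d2 = (cax - cbx) ** 2 + (cay - cby) ** 2
    # squared diagonal of the smallest enclosing box
    ex1, ey1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    ex2, ey2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c2 = (ex2 - ex1) ** 2 + (ey2 - ey1) ** 2
    return iou - d2 / c2

diou((0, 0, 2, 2), (0, 0, 2, 2))  # identical boxes -> 1.0
```

Because the center-distance term penalizes boxes whose centers are far apart even when their IoU is high, DIoU-NMS is less likely to discard a genuinely distinct overlapping target than IoU-only NMS.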
7. The method for detecting the workpiece of the YOLOv5-based trace DNA extraction workstation as claimed in claim 1 or 3, wherein parameter training is performed with the SGD optimization algorithm by default. The hyper-parameters are set as follows: the batch size is 8, the number of training epochs is 300, the momentum factor is 0.937, the weight decay coefficient is 0.0005, the initial learning rate is 0.01, the learning rate is dynamically adjusted with a cosine annealing strategy, and the loss function is GIoU Loss.
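The GIoU Loss named in this claim can be sketched as follows for a single pair of boxes: 1 minus GIoU, where GIoU subtracts from IoU the fraction of the smallest enclosing box not covered by the union. This is a generic sketch of the loss, not the patent's implementation.

```python
def giou_loss(box_a, box_b):
    """GIoU loss (1 - GIoU) for two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    iou = inter / union
    # smallest enclosing box C
    cx1, cy1 = min(box_a[0], box_b[0]), min(box_a[1], box_b[1])
    cx2, cy2 = max(box_a[2], box_b[2]), max(box_a[3], box_b[3])
    c_area = (cx2 - cx1) * (cy2 - cy1)
    giou = iou - (c_area - union) / c_area
    return 1.0 - giou

giou_loss((0, 0, 2, 2), (0, 0, 2, 2))  # perfect overlap -> 0.0
```

Unlike plain IoU loss, GIoU loss still provides a gradient when the predicted and ground-truth boxes do not overlap at all, since the enclosing-box term keeps growing as they move apart.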
Wherein the mAP formula is: mAP = (1/Q) · Σ_{q=1}^{Q} AP(q), where Q is the total number of classes and AP is the area under the Precision-Recall curve.
The Precision formula is: Precision = TP / (TP + FP), where TP is the number of positive samples predicted to be positive and FP is the number of negative samples predicted to be positive.
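The two formulas above can be sketched directly; the sample counts and per-class AP values used below are illustrative.

```python
def precision(tp, fp):
    """Precision = TP / (TP + FP), per the formula above."""
    return tp / (tp + fp)

def mean_average_precision(ap_per_class):
    """mAP = (1/Q) * sum of per-class AP values over Q classes."""
    return sum(ap_per_class) / len(ap_per_class)

precision(9, 1)                          # 0.9
mean_average_precision([0.8, 0.9, 1.0])  # ~0.9
```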
8. Application of the method for detecting the workpiece of the YOLOv5-based trace DNA extraction workstation as claimed in any one of claims 1 to 7 in image processing and target detection tasks such as traffic sign detection, face detection, target tracking, medical imaging, and defect and occlusion detection.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111195733.1A CN113850799B (en) | 2021-10-14 | 2021-10-14 | YOLOv 5-based trace DNA extraction workstation workpiece detection method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113850799A true CN113850799A (en) | 2021-12-28 |
CN113850799B CN113850799B (en) | 2024-06-07 |
Family
ID=78978203
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111195733.1A Active CN113850799B (en) | 2021-10-14 | 2021-10-14 | YOLOv 5-based trace DNA extraction workstation workpiece detection method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113850799B (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114359258A (en) * | 2022-01-17 | 2022-04-15 | Huazhong University of Science and Technology | Method, device and system for detecting target part of infrared moving object |
CN114898458A (en) * | 2022-04-15 | 2022-08-12 | China South Industries Group Automation Research Institute Co., Ltd. | Factory floor number monitoring method, system, terminal and medium based on image processing |
CN115273017A (en) * | 2022-04-29 | 2022-11-01 | Guilin University of Electronic Technology | Traffic sign detection recognition model training method and system based on Yolov5 |
CN115578662A (en) * | 2022-11-23 | 2023-01-06 | State Grid Intelligent Technology Co., Ltd. | Unmanned aerial vehicle front-end image processing method, system, storage medium and equipment |
CN115965582A (en) * | 2022-11-22 | 2023-04-14 | Harbin Shimada Dapeng Industrial Co., Ltd. | Ultrahigh-resolution-based engine cylinder body and cylinder cover surface defect detection method |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111476756A (en) * | 2020-03-09 | 2020-07-31 | Chongqing University | Method for identifying casting DR image loose defects based on improved YOLOv3 network model |
WO2020181685A1 (en) * | 2019-03-12 | 2020-09-17 | Nanjing University of Posts and Telecommunications | Vehicle-mounted video target detection method based on deep learning |
CN112270252A (en) * | 2020-10-26 | 2021-01-26 | Xi'an Polytechnic University | Multi-vehicle target identification method for improving YOLOv2 model |
Also Published As
Publication number | Publication date |
---|---|
CN113850799B (en) | 2024-06-07 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN113850799B (en) | YOLOv 5-based trace DNA extraction workstation workpiece detection method | |
CN111368690B (en) | Deep learning-based video image ship detection method and system under influence of sea waves | |
CN103324937B (en) | The method and apparatus of label target | |
CN112204674A (en) | Method for identifying biological material by microscopy | |
CN109284779A (en) | Object detection method based on deep full convolution network | |
CN113129335B (en) | Visual tracking algorithm and multi-template updating strategy based on twin network | |
CN110543912A (en) | Method for automatically acquiring cardiac cycle video in fetal key section ultrasonic video | |
Yang et al. | Instance segmentation and classification method for plant leaf images based on ISC-MRCNN and APS-DCCNN | |
Xing et al. | Traffic sign recognition using guided image filtering | |
CN113128335A (en) | Method, system and application for detecting, classifying and discovering micro-body paleontological fossil image | |
CN117152484B (en) | Small target cloth flaw detection method based on improved YOLOv5s | |
Wang et al. | Improved YOLOv3 detection method for PCB plug-in solder joint defects based on ordered probability density weighting and attention mechanism | |
CN109598712A (en) | Quality determining method, device, server and the storage medium of plastic foam cutlery box | |
CN117649526A (en) | High-precision semantic segmentation method for automatic driving road scene | |
CN110991300B (en) | Automatic identification method for abnormal swelling state of dorking abdomen | |
Shishkin et al. | Implementation of yolov5 for detection and classification of microplastics and microorganisms in marine environment | |
CN116310902A (en) | Unmanned aerial vehicle target detection method and system based on lightweight neural network | |
Shen et al. | Car plate detection based on Yolov3 | |
CN114529852A (en) | Video data-based carry-over detection and analysis method | |
Min et al. | Vehicle detection method based on deep learning and multi-layer feature fusion | |
Cheng et al. | RETRACTED ARTICLE: Capacitance pin defect detection based on deep learning | |
Senthilnayaki et al. | Traffic Sign Prediction and Classification Using Image Processing Techniques | |
CN117593514B (en) | Image target detection method and system based on deep principal component analysis assistance | |
CN111259843B (en) | Multimedia navigator testing method based on visual stability feature classification registration | |
CN116503406B (en) | Hydraulic engineering information management system based on big data |
Legal Events
Date | Code | Title | Description
---|---|---|---
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant ||