CN113610050A - Mask wearing real-time detection method based on YOLOv5 - Google Patents

Mask wearing real-time detection method based on YOLOv5

Info

Publication number
CN113610050A
CN113610050A
Authority
CN
China
Prior art keywords
mask
yolov5
detection
data set
worn
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110985214.9A
Other languages
Chinese (zh)
Inventor
贾慧杰
肖中俊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Qilu University of Technology
Original Assignee
Qilu University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Qilu University of Technology filed Critical Qilu University of Technology
Priority to CN202110985214.9A priority Critical patent/CN113610050A/en
Publication of CN113610050A publication Critical patent/CN113610050A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/23 Clustering techniques
    • G06F 18/232 Non-hierarchical techniques
    • G06F 18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F 18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a mask wearing real-time detection method based on YOLOv5, belonging to the field of machine learning. The method comprises the following steps: building a data set for mask wearing detection, analyzing the problems that arise when the YOLOv5 model is used for mask detection, and optimizing the YOLOv5 network accordingly: the anchor parameters are computed with the K-Means++ algorithm, the original loss function is modified by introducing CIOU_Loss as the bounding box regression loss, and DIOU_NMS is introduced in the post-processing stage of target detection to replace the original weighted non-maximum suppression (NMS). The algorithm improves mask detection accuracy in crowded scenes and reduces missed detections caused by occlusion.

Description

Mask wearing real-time detection method based on YOLOv5
Technical Field
The invention relates to the field of machine learning, in particular to a mask wearing real-time detection method based on YOLOv5.
Background
After the outbreak of COVID-19, wearing a mask in public places such as stations and shopping malls became an effective means of epidemic prevention. It is therefore necessary to detect whether people are wearing masks in public places; relying solely on human observation consumes a large amount of manpower, and missed detections easily occur in crowded places. Real-time detection of mask wearing is therefore of great practical significance.
In recent years, conventional image processing techniques have shown slow speed, poor stability and reduced accuracy under changing environments in target detection applications; with the rapid development of deep learning and machine vision, target detection algorithms based on deep learning have been widely applied. Real-time detection of mask wearing involves strong real-time requirements, small target objects and frequent occlusion, so a target detection algorithm that is fast and highly accurate for occluded small targets is needed. YOLOv5, a single-stage target detection algorithm released in May 2020, has a fast inference speed and a compact network structure; its model structure is mainly divided into four parts, namely the Input end, the Backbone basic network, the Neck network and the Prediction output layer. However, small, easily occluded objects such as masks are still easily missed. Therefore, the invention improves and optimizes on the basis of YOLOv5 and provides a novel YOLOv5 mask wearing real-time detection method.
Disclosure of Invention
In view of the above problems, the invention provides a novel YOLOv5 mask wearing real-time detection method, which addresses the speed and accuracy problems of existing detection methods.
A mask wearing real-time detection method based on YOLOv5 is characterized by comprising the following steps:
step 1: making a data set for mask wearing detection;
step 2: building a YOLOv5 network framework;
Step 3: training a data set of mask wearing detection by using a YOLOv5 network;
Step 4: the trained novel YOLOv5 model is used for real-time detection of mask wearing.
Preferably, step 1 specifically comprises:
Step 1.1: firstly, collecting two thousand photos of people correctly wearing masks and people not wearing masks;
Step 1.2: enhancing the data set with augmentation methods such as rotation and cropping on the basis of the original data, and expanding it to 5000 photos;
Step 1.3: labeling the 5000 pictures with LabelImg.
Preferably, step 2 specifically comprises:
Step 2.1: using CIOU_Loss as the loss function of the bounding box, which is defined as:

$$CIOU\_Loss = 1 - IOU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v$$

CIOU_Loss takes into account the overlap area, the center point distance and the aspect ratio, where α is a weight coefficient, v measures the consistency of the aspect ratios of the detection frame and the real frame, b and b^{gt} respectively denote the center points of the prediction frame and the real frame, ρ denotes the Euclidean distance, and c denotes the diagonal distance of the minimum circumscribed rectangle of the target. The expressions for α and v are:

$$\alpha = \frac{v}{(1 - IOU) + v}$$

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2$$

Step 2.2: the non-maximum suppression of the novel YOLOv5 adopts DIOU_NMS, which considers not only the IOU but also the distance between the two center points; when the distance between the center points of two frames is relatively large, they are regarded as frames of two different objects and are not filtered out due to occlusion.
Preferably, step 3 specifically comprises:
Step 3.1: applying the K-Means++ algorithm to the data set labeled in step 1 to cluster the heights and widths of the target boxes, so as to determine the optimal values of the anchor parameters in the model;
Step 3.2: dividing the labeled data set into a training set and a test set at a ratio of 9:1, and inputting the anchor parameters calculated in step 3.1 into the network;
Step 3.3: setting the network training parameters: the batch size is set to 128, the weight decay coefficient to 0.0005, the total number of iterations to 500, and the initial learning rate to 0.001; the learning rate is reduced to 0.0001 at iteration 400 and to 0.00001 at iteration 450.
Preferably, the novel YOLOv5 model trained in step 3 is used for inference testing in step 4.
The testing in step 4 is evaluated with the metrics given by the following formulas.
$$Precision = \frac{TP}{TP + FP}$$

$$Recall = \frac{TP}{TP + FN}$$

$$AP = \int_0^1 P(R)\,dR$$

$$mAP = \frac{1}{M}\sum_{i=1}^{M} AP_i$$
where TP represents the number of samples correctly predicted by the model as wearing a mask, FP represents the number of samples not wearing a mask but identified as wearing one, and FN represents the number of samples wearing a mask but identified as not wearing one; M represents the number of categories, i ∈ (1, M). The evaluation metrics are accuracy (Precision), recall (Recall) and mean average precision (mAP). The higher the precision and the recall, the better the model's mask detection effect; mAP is an important index for evaluating model performance, and the higher the mAP value, the better the model performs.
Drawings
FIG. 1 is a flow chart of the method of the present invention
FIG. 2 is a diagram of the YOLOv5 network structure of the present invention
FIG. 3 shows the variation of parameters in the training process of the present invention
Fig. 4 shows the partial image detection result of the present invention.
Detailed Description
The following describes the novel YOLOv5 mask wearing real-time detection method of the present invention with reference to the drawings.
The flow of the novel YOLOv5 real-time mask wearing detection method is shown in Fig. 1; the specific steps are as follows:
Step 1: two thousand pictures of people wearing masks and people not wearing masks are collected with a web crawler; to address the small number of pictures, the set is expanded using methods such as rotation, cropping and splicing.
The pictures are labeled with LabelImg: the position of each face is marked with a rectangular box, the categories are labeled as mask worn and mask not worn, and the annotations are saved in YOLO format.
The labeled files are divided into a training set and a test set at a ratio of 9:1.
With the above operations completed, the data set for mask wearing detection is ready.
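To make step 1 concrete, the following Python sketch performs the rotation/cropping augmentation and the 9:1 split described above. All paths, the rotation range and the crop margins are illustrative assumptions, and the matching adjustment of the YOLO label files is omitted for brevity.

```python
import random
from pathlib import Path

from PIL import Image

SRC = Path("data/raw")          # assumed folder with the ~2000 collected photos
DST = Path("data/augmented")    # grows toward the 5000-photo data set
DST.mkdir(parents=True, exist_ok=True)

def augment(img: Image.Image) -> list[Image.Image]:
    """Produce a rotated and a cropped variant of one photo."""
    w, h = img.size
    rotated = img.rotate(random.uniform(-15, 15), expand=True)
    cropped = img.crop((int(0.1 * w), int(0.1 * h), int(0.9 * w), int(0.9 * h)))
    return [rotated, cropped]

for path in sorted(SRC.glob("*.jpg")):
    img = Image.open(path)
    img.save(DST / path.name)                       # keep the original photo
    for i, aug in enumerate(augment(img)):
        aug.save(DST / f"{path.stem}_aug{i}.jpg")   # augmented variants

# 9:1 split into a training set and a test set, as in the text above.
files = sorted(DST.glob("*.jpg"))
random.shuffle(files)
cut = int(0.9 * len(files))
train_files, test_files = files[:cut], files[cut:]
```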
Step 2: analyzing a YOLOv5 model;
the YOLOv5 provides a single-stage target detection algorithm in 5 months of 2020, which is based on YOLOv4 and is improved in an input end, a backhaul basic network, a Neck network and an output layer, so that the algorithm has a faster inference speed and a smaller network structure.
As can be seen from fig. 2, the model structure of YOLOv5 is mainly divided into four parts, i.e., Input terminal, backhaul basic network, Neck network, and Prediction output layer.
The input end of YOLOv5 includes Mosaic data enhancement and adaptive picture scaling. The Mosaic data enhancement algorithm at the input end of YOLOv5 is an improvement on the CutMix data enhancement algorithm: the splicing of two pictures is extended to the splicing of four pictures, and the splicing mode is improved to random scaling, random arrangement and random cropping. In target detection, pictures with different aspect ratios are commonly padded with borders when scaled to a uniform size; the redundant information thus introduced slows down model inference. YOLOv5 therefore adopts adaptive picture scaling, adding as few black borders as possible during scaling, which improves the inference speed of the model.
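As one concrete reading of the adaptive scaling idea, the sketch below resizes with a single scale factor and pads the remainder; it is simplified (the actual YOLOv5 implementation pads only to the nearest stride multiple rather than to a full square, and the gray value 114 is borrowed from common practice, not from the patent).

```python
import cv2
import numpy as np

def letterbox(img: np.ndarray, new_size: int = 640) -> np.ndarray:
    """Resize keeping the aspect ratio, then pad the short side so the
    scaled picture carries as little border as possible."""
    h, w = img.shape[:2]
    r = new_size / max(h, w)                                  # one scale factor
    resized = cv2.resize(img, (int(round(w * r)), int(round(h * r))))
    pad_w, pad_h = new_size - resized.shape[1], new_size - resized.shape[0]
    top, bottom = pad_h // 2, pad_h - pad_h // 2
    left, right = pad_w // 2, pad_w - pad_w // 2
    return cv2.copyMakeBorder(resized, top, bottom, left, right,
                              cv2.BORDER_CONSTANT, value=(114, 114, 114))
```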
The Backbone basic network comprises the Focus structure and the CSP structure. The Focus module performs a slicing operation on the picture before it enters the backbone of YOLOv5. Taking YOLOv5s as an example, an original 640×640×3 image entering the Focus structure becomes a 320×320×12 feature map through the slicing operation and then a 320×320×32 feature map through a convolution operation, finally yielding a two-fold downsampled feature map without information loss.
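The slicing operation can be written down directly in PyTorch; the sketch below is a simplified stand-in for the real Focus block, which additionally wraps the convolution with batch normalization and an activation.

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Slice a picture into four interleaved sub-images and fuse them:
    640x640x3 -> 320x320x12 (slicing) -> 320x320x32 (convolution)."""
    def __init__(self, c_in: int = 3, c_out: int = 32):
        super().__init__()
        self.conv = nn.Conv2d(c_in * 4, c_out, kernel_size=3, padding=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Take every second pixel in four phases, concatenate on channels.
        sliced = torch.cat([x[..., ::2, ::2], x[..., 1::2, ::2],
                            x[..., ::2, 1::2], x[..., 1::2, 1::2]], dim=1)
        return self.conv(sliced)

# Focus()(torch.randn(1, 3, 640, 640)).shape -> torch.Size([1, 32, 320, 320])
```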
Two CSP structures are designed in YOLOv5: the Backbone basic network uses CSP1_X, and the Neck network uses CSP2_X.
The Neck network of YOLOv5 improves on that of YOLOv4: the CSP2_X network replaces the ordinary convolutional network, which strengthens the network's feature fusion capability. The Neck network of YOLOv5 still uses the FPN+PAN structure, in which the top-down FPN passes strong semantic features from the upper layers downward but does not pass localization information; PAN supplements the FPN by adding a bottom-up feature pyramid after it, transmitting strong localization information from the lower layers upward.
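A minimal sketch of this FPN+PAN fusion over three feature levels is given below; the uniform channel width and the bare additions are simplifying assumptions (the real Neck interposes CSP2_X blocks and further convolutions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FpnPan(nn.Module):
    """Top-down FPN pass followed by a bottom-up PAN pass."""
    def __init__(self, c: int = 256):
        super().__init__()
        self.down3 = nn.Conv2d(c, c, 3, stride=2, padding=1)  # bottom-up downsample
        self.down4 = nn.Conv2d(c, c, 3, stride=2, padding=1)

    def forward(self, p3: torch.Tensor, p4: torch.Tensor, p5: torch.Tensor):
        # FPN: strong semantic features flow from the top layers downward.
        f4 = p4 + F.interpolate(p5, scale_factor=2, mode="nearest")
        f3 = p3 + F.interpolate(f4, scale_factor=2, mode="nearest")
        # PAN: strong localization features flow from the bottom layers upward.
        n4 = f4 + self.down3(f3)
        n5 = p5 + self.down4(n4)
        return f3, n4, n5
```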
The Prediction output layer comprises the bounding box regression loss and NMS. YOLOv5 uses GIOU_Loss as the loss function of the bounding box. In the post-processing stage of target detection, weighted NMS is adopted to screen the multiple target boxes.
The above analysis shows that YOLOv5 performs well in both accuracy and speed when detecting targets in natural scenes; however, for small, easily occluded targets such as masks in crowded places, its accuracy drops and missed detections occur easily.
Step 3: improved optimization of the YOLOv5 model
To solve the above problems, the invention improves YOLOv5 so that the mask wearing detection accuracy is increased and occluded faces in dense crowds are detected more reliably.
Improvement of initial candidate box
In the YOLO algorithm, anchor boxes with preset widths and heights are defined for different data sets. During network training, the network outputs prediction frames on the basis of the initial anchor frames, compares them with the real frames (ground truth), calculates the difference between the two, and then updates the network parameters through back-propagation. The parameter values of the initial candidate boxes are therefore crucial for the subsequent training of YOLOv5. The algorithm applies the K-Means++ algorithm to cluster the heights and widths of the target frames labeled in the training set, so as to determine the optimal values of the anchor parameters in the model.
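A minimal sketch of this clustering step, assuming the labeled box sizes have been gathered into an array of (width, height) pairs in pixels; note that YOLOv5's built-in auto-anchor uses an IoU-based fitness measure, whereas this sketch uses plain Euclidean K-Means with K-Means++ initialization.

```python
import numpy as np
from sklearn.cluster import KMeans

def compute_anchors(wh: np.ndarray, n_anchors: int = 9) -> np.ndarray:
    """Cluster labeled (width, height) pairs to obtain initial anchors."""
    km = KMeans(n_clusters=n_anchors, init="k-means++",
                n_init=10, random_state=0).fit(wh)
    anchors = km.cluster_centers_
    # Sort by area so the smallest anchors go to the high-resolution head.
    return anchors[np.argsort(anchors.prod(axis=1))]

# wh would hold one (w, h) row per labeled box, e.g. loaded from the labels:
# anchors = compute_anchors(np.loadtxt("boxes_wh.txt"))
```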
Improvement of loss function
The YOLOv5 model uses GIOU_Loss as the loss function of the bounding box, as shown in formula (1). GIOU_Loss solves the problem in IOU_Loss that the loss is 0 when the detection frame and the real frame do not intersect, but it converges slowly, and when one frame completely contains the other, GIOU_Loss degenerates to a constant value regardless of the position of the inner frame.

$$GIOU\_Loss = 1 - \left(IOU - \frac{|C \setminus (A \cup B)|}{|C|}\right) \qquad (1)$$

where A and B denote the detection frame and the real frame, and C denotes their minimum enclosing rectangle.
To solve these problems, CIOU_Loss is used herein as the loss function of the bounding box, as shown in formula (2), where α is a weight coefficient, v measures the consistency of the aspect ratios of the detection frame and the real frame, b and b^{gt} respectively denote the center points of the prediction frame and the real frame, ρ denotes the Euclidean distance, c denotes the diagonal distance of the minimum circumscribed rectangle of the target, w^{gt} and h^{gt} are the width and height of the real frame, and w and h are the width and height of the detection frame. CIOU_Loss takes into account the overlap area, the center point distance and the aspect ratio.

$$CIOU\_Loss = 1 - IOU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v \qquad (2)$$

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2 \qquad (3)$$

$$\alpha = \frac{v}{(1 - IOU) + v} \qquad (4)$$
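A PyTorch sketch of CIOU_Loss per formulas (2)-(4) follows. Boxes are assumed to be in (x1, y1, x2, y2) form; the small epsilon terms guard against division by zero and are not part of the formulas.

```python
import math

import torch

def ciou_loss(pred: torch.Tensor, target: torch.Tensor,
              eps: float = 1e-7) -> torch.Tensor:
    """CIOU_Loss for N prediction/real box pairs, shape (N, 4) each."""
    # IOU: intersection over union of the two frames.
    x1 = torch.max(pred[:, 0], target[:, 0]); y1 = torch.max(pred[:, 1], target[:, 1])
    x2 = torch.min(pred[:, 2], target[:, 2]); y2 = torch.min(pred[:, 3], target[:, 3])
    inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
    area_p = (pred[:, 2] - pred[:, 0]) * (pred[:, 3] - pred[:, 1])
    area_t = (target[:, 2] - target[:, 0]) * (target[:, 3] - target[:, 1])
    iou = inter / (area_p + area_t - inter + eps)

    # rho^2: squared Euclidean distance between the center points b and b^gt.
    rho2 = ((pred[:, 0] + pred[:, 2] - target[:, 0] - target[:, 2]) ** 2 +
            (pred[:, 1] + pred[:, 3] - target[:, 1] - target[:, 3]) ** 2) / 4

    # c^2: squared diagonal of the minimum circumscribed rectangle.
    cw = torch.max(pred[:, 2], target[:, 2]) - torch.min(pred[:, 0], target[:, 0])
    ch = torch.max(pred[:, 3], target[:, 3]) - torch.min(pred[:, 1], target[:, 1])
    c2 = cw ** 2 + ch ** 2 + eps

    # v (formula 3) and alpha (formula 4).
    w_p, h_p = pred[:, 2] - pred[:, 0], pred[:, 3] - pred[:, 1]
    w_t, h_t = target[:, 2] - target[:, 0], target[:, 3] - target[:, 1]
    v = (4 / math.pi ** 2) * (torch.atan(w_t / (h_t + eps)) -
                              torch.atan(w_p / (h_p + eps))) ** 2
    alpha = v / ((1 - iou) + v + eps)

    return 1 - iou + rho2 / c2 + alpha * v
```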
Improvements in NMS
YOLOv5 employs weighted NMS, in which the IOU is the only factor considered. However, for mask detection, when two persons are close to each other the masks easily occlude one another, and because the IOU value is then relatively large, only one detection frame is left after NMS processing, causing missed detections. This is improved here by replacing the weighted NMS with DIOU_NMS, which considers not only the IOU but also the distance between the center points of the two boxes: when the center distance of two boxes is relatively large, they are regarded as boxes of two different objects and are not filtered out.
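The DIOU_NMS post-processing can be sketched as follows: a lower-scoring box is suppressed only when its DIoU with a kept box exceeds the threshold, so two heavily overlapping boxes whose centers lie far apart (two occluded faces) both survive. The 0.45 threshold is an assumed value, not one given in the patent.

```python
import torch

def diou_nms(boxes: torch.Tensor, scores: torch.Tensor,
             thr: float = 0.45) -> list[int]:
    """DIOU_NMS over boxes in (x1, y1, x2, y2) form; returns kept indices."""
    order = scores.argsort(descending=True)
    keep: list[int] = []
    while order.numel() > 0:
        i = order[0].item()
        keep.append(i)
        if order.numel() == 1:
            break
        rest = order[1:]
        # Plain IOU between the kept box and the remaining boxes.
        x1 = torch.max(boxes[i, 0], boxes[rest, 0]); y1 = torch.max(boxes[i, 1], boxes[rest, 1])
        x2 = torch.min(boxes[i, 2], boxes[rest, 2]); y2 = torch.min(boxes[i, 3], boxes[rest, 3])
        inter = (x2 - x1).clamp(0) * (y2 - y1).clamp(0)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = (boxes[rest, 2] - boxes[rest, 0]) * (boxes[rest, 3] - boxes[rest, 1])
        iou = inter / (area_i + area_r - inter + 1e-7)
        # DIoU subtracts the normalized center-point distance penalty.
        rho2 = ((boxes[i, 0] + boxes[i, 2] - boxes[rest, 0] - boxes[rest, 2]) ** 2 +
                (boxes[i, 1] + boxes[i, 3] - boxes[rest, 1] - boxes[rest, 3]) ** 2) / 4
        cw = torch.max(boxes[i, 2], boxes[rest, 2]) - torch.min(boxes[i, 0], boxes[rest, 0])
        ch = torch.max(boxes[i, 3], boxes[rest, 3]) - torch.min(boxes[i, 1], boxes[rest, 1])
        diou = iou - rho2 / (cw ** 2 + ch ** 2 + 1e-7)
        order = rest[diou <= thr]
    return keep
```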
Step 4: training the novel YOLOv5 network
The experimental environment for the novel YOLOv5 mask wearing real-time detection method uses the Windows 10 operating system, an NVIDIA GeForce RTX 2060 graphics card and the PyTorch deep learning framework. The specific configuration is shown in Table 1:
table 1 experimental environment configuration
Item | Configuration
Operating system | Windows 10
GPU | NVIDIA GeForce RTX 2060
Deep learning framework | PyTorch
Setting the network training parameters: the batch size is set to 128, the weight decay coefficient to 0.0005, the total number of iterations to 500, and the initial learning rate to 0.001; the learning rate is reduced to 0.0001 at iteration 400 and to 0.00001 at iteration 450.
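These settings map directly onto a standard PyTorch schedule. The sketch below assumes an SGD optimizer (the patent does not name the optimizer) and a placeholder module in place of the actual network; the two learning-rate drops at iterations 400 and 450 are expressed as MultiStepLR milestones.

```python
import torch

model = torch.nn.Linear(10, 2)  # placeholder for the YOLOv5 network
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, weight_decay=0.0005)
# 0.001 -> 0.0001 after iteration 400 -> 0.00001 after iteration 450
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[400, 450], gamma=0.1)

BATCH_SIZE, TOTAL_ITERS = 128, 500
for it in range(TOTAL_ITERS):
    # ... forward pass on a batch of 128, loss, backward, optimizer.step() ...
    scheduler.step()
```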
The anchor values determined by the K-Means++ algorithm in step 3 are input into the novel YOLOv5, and the data set produced in step 1 is used for training.
Step 5: evaluation and testing of the novel YOLOv5 model
The model evaluation indexes adopted herein include accuracy (Precision), recall (Recall) and mean average precision (mAP). The higher the precision and the recall, the better the model's mask detection effect; mAP is an important index for evaluating model performance, and the higher the mAP value, the better the model performs. The specific formulas are as follows.
$$Precision = \frac{TP}{TP + FP}$$

$$Recall = \frac{TP}{TP + FN}$$

$$AP = \int_0^1 P(R)\,dR$$

$$mAP = \frac{1}{M}\sum_{i=1}^{M} AP_i$$
where TP represents the number of samples correctly predicted by the model as wearing a mask, FP represents the number of samples not wearing a mask but identified as wearing one, and FN represents the number of samples wearing a mask but identified as not wearing one; M represents the number of categories, i ∈ (1, M).
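These definitions translate directly into code. The sketch below assumes the per-class precision/recall points have already been accumulated from the detector's score-ranked outputs, and computes AP as the area under the interpolated precision-recall curve.

```python
import numpy as np

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Precision = TP / (TP + FP); Recall = TP / (TP + FN)."""
    return tp / (tp + fp), tp / (tp + fn)

def average_precision(precision: np.ndarray, recall: np.ndarray) -> float:
    """AP: area under the precision-recall curve for one class."""
    r = np.concatenate(([0.0], recall, [1.0]))
    p = np.concatenate(([0.0], precision, [0.0]))
    p = np.maximum.accumulate(p[::-1])[::-1]   # monotone precision envelope
    return float(np.sum((r[1:] - r[:-1]) * p[1:]))

def mean_average_precision(aps: list[float]) -> float:
    """mAP: mean of the per-class AP values over the M categories."""
    return sum(aps) / len(aps)
```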
The change of each evaluation index during model training is shown in Fig. 3. After training is completed, the test set is used for testing: the precision on the test set reaches 96.5% and the mAP reaches 93.1%. Compared with the original YOLOv5, the precision is improved by 4.5% and the mAP by 1.1%; the comparison in Table 2 demonstrates the feasibility of the algorithm.
TABLE 2 comparison of algorithmic Performance
(Table image: precision and mAP comparison between the original YOLOv5 and the novel YOLOv5.)
Partial image test results are shown in Fig. 4. They show that masks at small-target and crowded, occluded locations are detected well, and missed detections are reduced.
For the situation in which YOLOv5 suffers reduced accuracy and missed detections when detecting small, easily occluded objects such as masks, the method improves the selection of the initial candidate boxes on the basis of YOLOv5 and replaces the loss function and the non-maximum suppression, and its feasibility is proved through experiments. Although particular embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these particular embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A mask wearing real-time detection method based on YOLOv5 is characterized by comprising the following steps:
step 1: making a data set for mask wearing detection;
step 2: building a YOLOv5 network framework;
Step 3: training a data set of mask wearing detection by using a YOLOv5 network;
Step 4: the trained YOLOv5 model is used in real-time mask wearing detection.
2. The method for detecting wearing of a mask in real time based on YOLOv5 as claimed in claim 1, wherein the creation of the data set in step 1 comprises the following steps:
Step 1.1: firstly, collecting two thousand photos of people correctly wearing masks and people not wearing masks;
Step 1.2: enhancing the data set with rotation and cropping data augmentation methods on the basis of the original data, and expanding it to 5000 photos;
Step 1.3: labeling the 5000 pictures with LabelImg.
3. The method for detecting wearing of a mask in real time based on YOLOv5 as claimed in claim 1, wherein step 2 specifically comprises:
Step 2.1: using CIOU_Loss as the loss function of the bounding box, which is defined as:

$$CIOU\_Loss = 1 - IOU + \frac{\rho^2(b, b^{gt})}{c^2} + \alpha v$$

CIOU_Loss takes into account the overlap area, the center point distance and the aspect ratio; where α is a weight coefficient, v measures the consistency of the aspect ratios of the detection frame and the real frame, b and b^{gt} respectively denote the center points of the prediction frame and the real frame, ρ denotes the Euclidean distance, c denotes the diagonal distance of the minimum circumscribed rectangle of the target, and IOU denotes the ratio of the intersection area of the two frames to their union area; the expressions for α and v are:

$$\alpha = \frac{v}{(1 - IOU) + v}$$

$$v = \frac{4}{\pi^2}\left(\arctan\frac{w^{gt}}{h^{gt}} - \arctan\frac{w}{h}\right)^2$$

Step 2.2: the non-maximum suppression of the novel YOLOv5 adopts DIOU_NMS, which considers not only the IoU but also the distance between the two center points; when the distance between the center points of two frames is relatively large, they are regarded as frames of two different objects and are not filtered out due to occlusion.
4. The method for detecting wearing of a mask in real time based on YOLOv5 as claimed in claim 2, wherein step 3 specifically comprises:
Step 3.1: applying the K-Means++ algorithm to the data set labeled in step 1 to cluster the heights and widths of the target boxes, so as to determine the optimal values of the anchor parameters in the model;
Step 3.2: dividing the labeled data set into a training set and a test set at a ratio of 9:1, and inputting the anchor parameters calculated in step 3.1 into the network;
Step 3.3: setting the network training parameters: the batch size is set to 128, the weight decay coefficient to 0.0005, the total number of iterations to 500, and the initial learning rate to 0.001; the learning rate is reduced to 0.0001 at iteration 400 and to 0.00001 at iteration 450.
5. The method for detecting wearing of a mask in real time based on YOLOv5 as claimed in claim 1, wherein the YOLOv5 model trained in step 3 is used for inference testing in step 4.
6. The method for detecting wearing of a mask in real time based on YOLOv5 as claimed in claim 1, wherein the detection of step 4 is evaluated and tested with the following formulas:

$$Precision = \frac{TP}{TP + FP}$$

$$Recall = \frac{TP}{TP + FN}$$

$$AP = \int_0^1 P(R)\,dR$$

$$mAP = \frac{1}{M}\sum_{i=1}^{M} AP_i$$

wherein Precision is the accuracy, Recall is the recall rate, mAP is the mean average precision, TP represents the number of samples correctly predicted by the model as wearing a mask, FP represents the number of samples not wearing a mask but identified as wearing one, and FN represents the number of samples wearing a mask but identified as not wearing one; M represents the number of categories, i ∈ (1, M).
CN202110985214.9A 2021-08-26 2021-08-26 Mask wearing real-time detection method based on YOLOv5 Pending CN113610050A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110985214.9A CN113610050A (en) 2021-08-26 2021-08-26 Mask wearing real-time detection method based on YOLOv5


Publications (1)

Publication Number Publication Date
CN113610050A true CN113610050A (en) 2021-11-05

Family

ID=78342072

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110985214.9A Pending CN113610050A (en) 2021-08-26 2021-08-26 Mask wearing real-time detection method based on YOLOv5

Country Status (1)

Country Link
CN (1) CN113610050A (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112287827A (en) * 2020-10-29 2021-01-29 南通中铁华宇电气有限公司 Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole
CN112766188A (en) * 2021-01-25 2021-05-07 浙江科技学院 Small-target pedestrian detection method based on improved YOLO algorithm
CN112819804A (en) * 2021-02-23 2021-05-18 西北工业大学 Insulator defect detection method based on improved YOLOv5 convolutional neural network
CN113221670A (en) * 2021-04-21 2021-08-06 成都理工大学 Technology for mask wearing identification

Non-Patent Citations (6)

* Cited by examiner, † Cited by third party
Title
Xiaokang Ren et al.: "Mask wearing detection based on YOLOv3", Journal of Physics: Conference Series *
Zhang Hongqun et al.: "Ship detection method for remote sensing images based on YOLOv5", Electronic Measurement Technology *
Wang Bing et al.: "Mask detection algorithm based on an improved lightweight YOLO network", Computer Engineering and Applications *
Wang Feng: "Artificial intelligence detection and recognition algorithm for mask and safety helmet wearing based on improved YOLOv5", Construction and Budget *
Xiao Bojian et al.: "Research on mask wearing recognition using the YOLOv5 model", Fujian Computer *
Tan Shilei et al.: "Real-time detection of personnel mask wearing based on the YOLOv5 network model", Laser Journal *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114241548A (en) * 2021-11-22 2022-03-25 电子科技大学 Small target detection algorithm based on improved YOLOv5
CN114399799A (en) * 2021-11-22 2022-04-26 电子科技大学 Mask wearing detection method based on YOLOv5 network
CN114387484A (en) * 2022-01-11 2022-04-22 华南农业大学 Improved mask wearing detection method and system based on yolov4
CN114387484B (en) * 2022-01-11 2024-04-16 华南农业大学 Improved mask wearing detection method and system based on yolov4
CN115909457A (en) * 2022-11-23 2023-04-04 大连工业大学 Mask wearing detection method based on polarization imaging AI recognition


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211105