CN113553979A - Safety clothing detection method and system based on improved YOLO V5 - Google Patents

Safety clothing detection method and system based on improved YOLO V5

Info

Publication number
CN113553979A
Authority
CN
China
Prior art keywords
safety
yolo
wearing
detection
suit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110871211.2A
Other languages
Chinese (zh)
Other versions
CN113553979B (en)
Inventor
于俊清
张培基
陈刚
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guodian Hanchuan Power Generation Co ltd
Huazhong University of Science and Technology
Original Assignee
Guodian Hanchuan Power Generation Co ltd
Huazhong University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guodian Hanchuan Power Generation Co ltd, Huazhong University of Science and Technology filed Critical Guodian Hanchuan Power Generation Co ltd
Priority to CN202110871211.2A priority Critical patent/CN113553979B/en
Publication of CN113553979A publication Critical patent/CN113553979A/en
Application granted granted Critical
Publication of CN113553979B publication Critical patent/CN113553979B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02P CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00 Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30 Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Mathematical Physics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Probability & Statistics with Applications (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a safety clothing detection method and system based on improved YOLO V5, belonging to the field of target detection. The method comprises the following steps: training the improved YOLO V5 with a safety-clothing wearing-state training set, in which each training sample is a picture frame containing a worker and the label is the worker's safety-clothing wearing state, to obtain a trained detection model; and inputting each frame of an industrial monitoring video into the trained detection model to obtain a safety-suit detection result. The method replaces the Backbone module of the original YOLO V5 algorithm with different neural network structures. With EfficientNet as the Backbone, the width and depth of the network structure and the resolution of the input image are scaled uniformly through a compound model expansion coefficient, yielding better results than manually tuning the parameters of YOLO V5. With ResNet50 as the Backbone, the added residual blocks allow the feature information extracted by the network to be fully retained into the next layer, effectively eliminating gradient dispersion between network layers during forward propagation. With ShuffleNet or MobileNet as the Backbone, the complexity of the network structure and the volume of the model are reduced, yielding a lightweight model.

Description

Safety clothing detection method and system based on improved YOLO V5
Technical Field
The invention belongs to the field of target detection, and particularly relates to a safety clothing detection method and system based on improved YOLO V5.
Background
In industrial production, safety is of paramount importance, and awareness of safe production continues to deepen. A safety suit is protective clothing worn by workers in a production operation area; wearing it properly effectively protects the body and reduces skin damage from hazardous chemicals such as acids and alkalis. Detecting the wearing state of workers' safety suits online and in real time from industrial monitoring video is an important guarantee of workers' safety in industrial scenes, and is also important for standardizing industrial management and safe production. At present, most industrial management units rely on manual supervision, visually inspecting whether workers entering and leaving the production operation area are wearing safety suits; this traditional inspection and supervision mode is inefficient.
Some online detection systems for workers' safety clothing based on target detection algorithms have gradually appeared, but most of the targeted safety clothing consists of reflective vests crawled from the web or photographed in daily-life scenes; safety-clothing targets from real industrial scenes are rare. Such methods can detect and recognize clear large and medium safety-suit targets in close view, but recognition is much harder for safety suits affected by illumination changes and shadow occlusion against the complex background of a real industrial scene. It is therefore very important to improve the detection accuracy for safety-suit targets in industrial monitoring video.
Current detection methods mainly use computer vision technology, recording images or videos of the work site with cameras. In one vision-based approach, instance segmentation divides the detected human body into the head, upper torso, and lower torso; HOG features are extracted from the three parts, and a support vector machine classifies whether the worker is wearing safety clothing. In recent years, as graphics-processor hardware resources have grown and deep-learning research has deepened, deep-learning-based object detection algorithms have become widely used in computer vision tasks. For example, an improved YOLOv3-based algorithm has been used for safety-clothing detection, completing multi-scale detection by enlarging the input size of the original YOLOv3 image and achieving higher accuracy at different test resolutions. To address the feature diversity caused by differences in clothing appearance, shape, and material composition, a YOLOv4-based two-stage detection algorithm has also been proposed, in which YOLOv4 detects the clothing on the upper and lower halves of the body as well as carried articles, and the targets are classified in detail by transfer learning.
At present, many research schemes for safety clothing detection are available, but some problems still exist, which are mainly reflected in the following points:
1. The target detection techniques adopted by existing solutions are relatively outdated. Most algorithms used by researchers for related target detection tasks are improvements on the YOLOv3 and YOLOv4 models; little recent work has studied the use of the newer YOLOv5 algorithm;
2. Most safety-clothing objects detected by existing research come from reflective vests crawled from the web or photographed in daily-life scenes; a safety-clothing target detection method oriented to real industrial scenes is lacking;
3. The algorithm models in existing research detect safety-suit targets poorly in real industrial scenes. The background of a real industrial scene is complex, and the safety-suit target to be detected is easily disturbed by illumination changes, multiple occlusions, loss of the monitoring picture, motion blur, and other complications, placing high demands on the robustness of the algorithm model.
Disclosure of Invention
In view of the defects and improvement needs of the prior art, the invention provides a safety-suit detection method and system based on improved YOLO V5, aiming to greatly improve the detection accuracy and robustness for safety-suit targets while the improved algorithm model still meets the real-time detection speed requirement.
To achieve the above object, according to a first aspect of the present invention, there is provided a security suit detection method based on improved YOLO V5, the method comprising:
training the improved YOLO V5 with a safety-clothing wearing-state training set, in which each training sample is a picture frame containing workers and the label is the workers' safety-clothing wearing state, to obtain a trained detection model; and inputting each frame of the industrial monitoring video into the trained detection model to obtain a safety-suit detection result;
the improved YOLO V5 comprises Input, improved Backbone, Neck and Prediction which are connected in series, wherein the improved Backbone is EfficientNet, ResNet50, ShuffleNet or MobileNet.
Preferably, the safety suit wearing state training set is constructed in the following way:
extracting a picture frame containing workers from a real industrial scene monitoring video;
labeling the workers in each picture frame according to their safety-suit wearing state, the label content including: whether a safety suit is worn and what color of safety suit is worn, to obtain a safety-suit wearing-state detection data set and a safety-suit wearing-color detection data set.
Preferably, the picture frames are extracted as follows: the first frame image read from the monitoring video stream is set as the background frame, static objects are treated as background, and moving target objects are extracted with a background modeling algorithm; for each subsequent frame, the difference from the current background frame is calculated: if the difference is greater than a threshold T, the background frame is updated; otherwise, the moving contour area of the moving target object is calculated, and if it is greater than a threshold T', the image frame is saved, otherwise the next frame is read; this continues until the video ends.
Has the advantages that: the invention provides a key frame extraction algorithm, which converts the detection of pedestrians in the interesting segments of the monitoring video into the detection of moving target objects in the video images, automatically selects the monitoring video data and greatly improves the data preprocessing efficiency.
Preferably, LabelImg software is used to mark how workers wear safety suits in the safety-suit wearing-state training set: wearing a safety suit correctly is marked "safetyCloth", wearing no safety suit is marked "No_safetyCloth", wearing a green safety suit is marked "Green", a white one "White", and an orange one "Orange"; the case where a worker faces the camera with the jacket unbuttoned or unzipped is marked "SC_Unzip".
Preferably, Random Erase is adopted to randomly occlude the picture frames in the Input part: random occlusion blocks generated in the target-object region occlude the target object in the image to different degrees, but never completely.
Has the advantages that: the safety clothing has a large target area and is easily shielded by various conditions in an industrial environment. The Random Erase data enhancement method simulates multi-shielding in an industrial environment through Random shielding of images in a Random Erase data enhancement mode, selects Random erasing in a training process to perform data expansion on positive samples in a safety suit target detection data set, and improves the robustness of an algorithm model.
Preferably, GridMask data enhancement is adopted to randomly occlude the picture frames in the Input part: a mask with the same resolution as the original image is generated and multiplied with it to obtain the processed image.
Has the advantages that: the safety clothing has a large target area and is easily shielded by various conditions in an industrial environment. The invention simulates multi-shielding in the industrial environment by the random shielding of the image by the GridMask, and can control the area and the density of the shielded area by five parameters of x, y, w, h and I. By uniformly shielding the original image area, the information of the image area is discarded, and the method enables the model to learn more different components of the target object, thereby improving the training effect.
To achieve the above object, according to a second aspect of the present invention, there is provided a security suit detection system based on improved YOLO V5, comprising: a computer-readable storage medium and a processor;
the computer-readable storage medium is used for storing executable instructions;
the processor is configured to read executable instructions stored in the computer-readable storage medium, and execute the method for detecting a security suit based on improved YOLO V5 according to the first aspect.
Generally, by the above technical solution conceived by the present invention, the following beneficial effects can be obtained:
The invention adopts the EfficientNet neural network structure as the Backbone; the width and depth of the network structure and the resolution of the input image are scaled uniformly through a compound model expansion coefficient, achieving better results than manually tuning the parameters of YOLO V5. Adopting the ResNet50 neural network structure as the Backbone, the added residual blocks alleviate the gradient-vanishing problem that arises when the YOLO V5 model is deepened and the weights of different convolutional layers cannot be updated in time during feature transfer. Through skip connections, the feature information the network extracts from the input image is fully retained into the next layer, effectively resolving gradient dispersion between YOLO V5 network layers during forward propagation. Adopting the ShuffleNet or MobileNet neural network structure as the Backbone reduces the complexity and model volume of the YOLO V5 network, realizing a lightweight model and thereby meeting the accuracy, speed, and model-deployment requirements that actual industrial detection services place on the algorithm model.
Drawings
FIG. 1 is a flow chart of a security service detection method based on improved YOLO V5 provided by the present invention;
FIG. 2 is a schematic view of a sample of the wearing state and color detection of the safety garment according to the present invention;
FIG. 3 is a schematic diagram of a network structure of the YOLOv5 algorithm before optimization;
FIG. 4 is a graphical representation of the expansion coefficients of a composite model provided by the present invention;
FIG. 5 is a schematic diagram of an optimized YOLOv5+ ResNet-50 network structure provided by the present invention;
FIG. 6 is a schematic diagram of the dual-channel branch feature-information exchange based on ShuffleNet channel splitting used in the present invention, in which (a) adds the channel split operation and (b) performs no channel split;
FIG. 7 is a schematic diagram of the MobileNet-based inverted residual network structure with a linear bottleneck used in the present invention;
FIG. 8 is a schematic representation of Random Erase data enhancement used in the present invention;
FIG. 9 is a schematic diagram of Grid Mask data enhancement used in the present invention;
FIG. 10 is a comparison of the detection accuracy of the YOLOv5 algorithm on an industrial safety service data set before and after optimization according to the present invention;
FIG. 11 is a comparison diagram of the detection results of the YOLOv5 algorithm on the industrial safety service data set before and after the optimization of the present invention;
FIG. 12 is a schematic diagram comparing the target detection results of the YOLOv5 algorithm before and after optimization on industrial safety-suit data in which the safety suit is worn irregularly.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. In addition, the technical features involved in the embodiments of the present invention described below may be combined with each other as long as they do not conflict with each other.
As shown in fig. 1, the invention provides a security clothing detection method based on improved YOLO V5, comprising the following steps:
s1, collecting monitoring video data in a real industrial scene, analyzing the collected data, judging whether the annotation content of the data set needs to be updated, if so, expanding and annotating the image content of the data set, and turning to S2; otherwise, the operation ends.
Step S2: update the industrial safety-suit data set, marking in the data-set images the workers' safety-suit wearing state and the color of the safety suit worn, using the LabelImg annotation tool.
In step S2, preprocessing the industrial monitoring-video data comprises: defining the image background, screening the video data, and labeling and classifying the images. The specific preprocessing method is as follows:
and S21, acquiring the monitoring video data in the real industrial scene, judging whether the data set content needs to be updated or not, if not, ending the operation, and if so, carrying out the next operation.
S22: define images without workers in the monitoring video as the detection background, preprocess the obtained industrial monitoring-video data, screen out the background frames, and retain the images in which workers appear.
S23: use LabelImg software to mark how workers wear safety suits in the safety-suit data set: wearing a safety suit correctly is marked "safetyCloth", wearing no safety suit "No_safetyCloth", wearing a green safety suit "Green", a white one "White", an orange one "Orange"; a worker facing the camera with the jacket unbuttoned or unzipped is marked "SC_Unzip", as shown in FIG. 2.
Step S3: re-cluster the data set labeled in S2 with the k-means clustering algorithm to obtain the number and sizes of the anchors for safety-suit target detection boxes in monitoring video of the real industrial scene, and adjust the number and sizes of the anchors of the YOLO V5 neural network accordingly.
In step S3, the specific method for re-clustering the industrial safety-suit data set with the k-means clustering algorithm is as follows:
S31: based on the real-industrial-scene safety-suit data set labeled in S2, apply the k-means clustering algorithm to the original pictures of the data set to obtain the number and sizes of new anchors suitable for safety suits in the industrial scene, generating 9 groups of anchors with sizes: (12 × 57), (18 × 84), (24 × 114), (31 × 144), (38 × 183), (51 × 219), (61 × 291), (75 × 366), (99 × 373). As shown in fig. 3, the prediction output of the YOLO V5 neural network consists of three detection heads, each corresponding to a group of anchor parameter values. For a 640 × 640 input image, the # P3/8 network-layer detection head has dimension 80 × 80 and detects small targets at scale 8 × 8; the # P4/16 head has dimension 40 × 40 and detects medium targets at scale 16 × 16; the # P5/32 head has dimension 20 × 20 and detects large targets at scale 32 × 32. The detection heads are matched with the anchors as follows: the first (# P3/8) head is matched with the first group of anchors [12,57,18,84,24,114]; the second (# P4/16) head with the second group [31,144,38,183,51,219]; and the third (# P5/32) head with the third group [61,291,75,366,99,373].
S32: replace the anchors the original YOLO V5 network generated by clustering on the COCO data set; the original 9 groups of anchors, (14 × 27), (23 × 46), (28 × 130), (39 × 148), (52 × 186), (62 × 279), (85 × 237), (88 × 360), (145 × 514), are not suitable for the safety-suit target detection task in an industrial scene. To apply the new anchors when training the YOLO V5 network model, the number and sizes of the anchors in the YOLO V5 network configuration file are corrected to the parameters obtained by the k-means clustering algorithm.
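The anchor re-clustering of S31 can be sketched as follows. The patent only states that k-means is used; this illustration adopts the 1 − IoU distance commonly used for anchor clustering (an assumption), and the function name is hypothetical:

```python
import numpy as np

def kmeans_anchors(wh, k=9, iters=50, seed=0):
    """Cluster labeled box sizes into k anchor (w, h) pairs.

    Distance is 1 - IoU between boxes anchored at the origin, so anchors
    of a similar shape cluster together regardless of absolute position.
    wh: N x 2 array of box widths and heights.
    Returns the k centers sorted by area (small -> large detection heads).
    """
    wh = np.asarray(wh, dtype=float)
    rng = np.random.default_rng(seed)
    centers = wh[rng.choice(len(wh), size=k, replace=False)].copy()
    for _ in range(iters):
        # IoU between every box and every center (broadcast to N x k)
        inter = (np.minimum(wh[:, None, 0], centers[None, :, 0])
                 * np.minimum(wh[:, None, 1], centers[None, :, 1]))
        union = wh.prod(axis=1)[:, None] + centers.prod(axis=1)[None, :] - inter
        assign = np.argmax(inter / union, axis=1)  # highest IoU = nearest center
        for j in range(k):
            if (assign == j).any():
                centers[j] = wh[assign == j].mean(axis=0)
    return centers[np.argsort(centers.prod(axis=1))]
```

The sorted output maps naturally onto the three detection heads: the three smallest anchors to # P3/8, the middle three to # P4/16, and the three largest to # P5/32.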
Step S4: optimize the neural network structure of the Backbone module in the YOLO V5 algorithm model. Considering both deep neural network models (EfficientNet and ResNet50) and lightweight neural network models (ShuffleNet and MobileNet) as replacements for the original network structure in the YOLO V5 Backbone module, a model-fusion algorithm based on YOLO V5 is proposed, covering several algorithm-model combinations: YOLO V5 + EfficientNet B8, YOLO V5 + ResNet50, YOLO V5 + ShuffleNet V2, and YOLO V5 + MobileNet V3.
In step S4, the specific method for optimizing the neural network structure of the Backbone module in the YOLO V5 algorithm model is as follows:
s41, replacing the network structure of a Backbone module in the YOLO V5 algorithm by using a deep neural network model EfficientNet, and scaling the model by using the original YOLO V5 algorithm to manually adjust the depth and the width of the neural network to obtain four models of YOLO V5S, YOLO V5m, YOLO V5l and YOLO V5x, as shown in fig. 4, using the method of composite model expansion coefficient in EfficientNet, on the basis of baseline, the zoom of a YOLO V5 model is controlled by adjusting the depth (depth, d), the width (w) and the resolution (r) of an image, wherein the depth of the network is the layer number of a network structure, the deeper the network is, the more the extracted feature information is, the width of the network is, the number of channels (channels) of the network structure is, the more the number of channels is, the stronger the ability of extracting fine-grained features is, the resolution of the input image is, the area of the input image is, and the higher the resolution is, the less the information loss in the image is; and a mode of replacing manual adjustment of network parameters by the expansion coefficient of the composite model is adopted to obtain a better model scaling result, and an algorithm combination of YOLO V5+ efficiency B8 is provided.
S42: after fusing YOLO V5 with a deep neural network model, the model training error no longer decreases as the number of network layers grows, and fitting during training becomes more difficult because the network lacks a residual structure. The network structure of the Backbone module in the YOLO V5 algorithm is therefore replaced with a residual network structure and, as shown in FIG. 5, the YOLO V5 + ResNet50 algorithm-model combination is proposed.
S43: considering the comprehensive requirements that a practical industrial detection task places on an algorithm model, such as accuracy, speed, and ease of model deployment, the lightweight neural network model ShuffleNet replaces the network structure of the Backbone module in the YOLO V5 algorithm; as shown in FIG. 6, a dual-channel feature-information exchange strategy based on channel splitting is applied to the original YOLO V5 network, and the YOLO V5 + ShuffleNet V2 algorithm-model combination is proposed.
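The channel-split-and-exchange idea of fig. 6 rests on ShuffleNet's channel-shuffle operation, which can be sketched in NumPy as below. This is a conceptual illustration only; in ShuffleNet V2 the shuffle is applied to the concatenated two-branch tensor inside the network:

```python
import numpy as np

def channel_shuffle(x, groups):
    """Interleave channels across groups so branch features mix.

    x: (n, c, h, w) feature map with c divisible by groups. Reshaping to
    (n, groups, c // groups, h, w), swapping the two channel axes, and
    flattening back interleaves the groups' channels.
    """
    n, c, h, w = x.shape
    assert c % groups == 0
    return (x.reshape(n, groups, c // groups, h, w)
             .transpose(0, 2, 1, 3, 4)
             .reshape(n, c, h, w))
```

After a channel split into two branches, concatenating and shuffling lets information computed in one branch reach the other in the next block, which is what makes the split cheap without isolating the branches.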
S44: the lightweight neural network model MobileNet replaces the network structure of the Backbone module in the YOLO V5 algorithm; as shown in fig. 7, an inverted residual network structure with a linear bottleneck is added to the Backbone module network of the original YOLO V5, and the YOLO V5 + MobileNet V3 algorithm-model combination is proposed.
Step S5: train the optimized YOLO V5 model-fusion algorithm to obtain a safety-suit detection algorithm model oriented to industrial monitoring video.
In step S5, the improved YOLO V5 model-fusion algorithm is trained to obtain the safety-suit detection algorithm model for industrial monitoring video as follows:
s51, downloading configuration files of a pre-training model and a network structure from a YoLO V5 official network, wherein the configuration files contain default hyper-parameters and relevant weight values of YoLO V5 model training, and loading the default hyper-parameters and the relevant weight values into an improved YOLO V5 model fusion neural network.
S52: adjust the description of the neural network structure in the YOLO V5 network configuration file according to the optimization method of S4. To obtain the best results of the YOLO V5 algorithm before and after optimization on the safety-suit data set, the data set is augmented during model training with Random Erase and GridMask data enhancement: Random Erase can be divided into Image-aware Random Erasing (IRE), which is guided by the image background, and Object-aware Random Erasing (ORE), which is guided by the target object, while GridMask controls the area and density of the occluded regions by adjusting the parameters x, y, w, h, and I, as shown in FIGS. 8 and 9. Training continues until the model converges.
S53: as shown in FIG. 10, test the detection accuracy and results of the YOLO V5 algorithm before and after optimization on the wearing state and color of safety suits in the industrial safety-suit data set. As shown in FIG. 11, the algorithm before improvement produces false detections and missed detections, while the optimized YOLO V5 model-fusion algorithm achieves higher detection accuracy and robustness for safety suits disturbed by illumination changes, occlusion, and other factors against the complex background of a real industrial scene. Testing the YOLO V5 algorithm before and after optimization on targets where a worker faces the camera with the safety suit worn irregularly yields the results shown in FIG. 12: the target objects obtain correct detection results under illumination changes, shadow, and similar conditions.
It will be understood by those skilled in the art that the foregoing is only a preferred embodiment of the present invention, and is not intended to limit the invention, and that any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (7)

1. A safety clothing detection method based on improved YOLO V5 is characterized by comprising the following steps:
training the improved YOLO V5 with a safety-clothing wearing-state training set, in which each training sample is a picture frame containing workers and the label is the workers' safety-clothing wearing state, to obtain a trained detection model; and inputting each frame of the industrial monitoring video into the trained detection model to obtain a safety-suit detection result;
the improved YOLO V5 comprises Input, improved Backbone, Neck and Prediction which are connected in series, wherein the improved Backbone is EfficientNet, ResNet50, ShuffleNet or MobileNet.
2. The method of claim 1, wherein the safety-suit-as-worn state training set is constructed by:
extracting a picture frame containing workers from a real industrial scene monitoring video;
labeling the workers in each picture frame according to their safety-suit wearing state, the label content including: whether a safety suit is worn and what color of safety suit is worn, to obtain a safety-suit wearing-state detection data set and a safety-suit wearing-color detection data set.
3. The method of claim 2, wherein the picture frames are extracted as follows: the first frame read from the surveillance video stream is set as the background frame, static objects are treated as background, and moving target objects are extracted with a background modeling algorithm; the difference between each subsequent frame and the current background frame is computed, and if the difference is greater than a threshold T, the background frame is updated; otherwise the moving contour area of the moving target object is calculated, and if it is greater than a threshold T', the image frame is saved; otherwise the next frame is read, continuing until the video ends.
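The extraction loop of claim 3 can be sketched as follows. This is a minimal NumPy illustration, not the patented implementation: the claim's unspecified "difference" is taken here as the mean absolute gray-level difference, the "moving contour area" is approximated by counting changed pixels (a production version would use OpenCV's `findContours`/`contourArea`), and `diff_thresh`/`area_thresh` stand in for the thresholds T and T'.

```python
import numpy as np

def select_key_frames(frames, diff_thresh=30.0, area_thresh=500):
    """Keep indices of frames that contain a sufficiently large moving
    object; refresh the background frame when the whole scene changes."""
    frames = iter(frames)
    background = next(frames).astype(np.int16)   # first frame = background
    kept = []
    for i, frame in enumerate(frames, start=1):
        delta = np.abs(frame.astype(np.int16) - background)
        if delta.mean() > diff_thresh:
            background = frame.astype(np.int16)  # difference > T: update background
            continue
        # Approximate the moving contour area by the count of changed pixels.
        moving_area = int((delta > 25).sum())
        if moving_area > area_thresh:            # area > T': save this frame
            kept.append(i)
    return kept
```

On real video the thresholds would be tuned per camera; the per-pixel change threshold of 25 is likewise an assumption.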
4. The method of claim 2, wherein LabelImg software is used to mark the wearing of safety suits by workers in the wearing-state training set: wearing a safety suit is marked "safetyCloth", wearing no safety suit is marked "No_safetyCloth", wearing a green safety suit is marked "Green", a white one "White", an orange one "Orange", and, for a worker whose front faces the camera, a safety suit worn without the top button or zipper fastened is marked "SC_Unzip".
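For training YOLO V5, the class names of claim 4 have to be mapped to integer ids. The sketch below shows one hypothetical encoding in the usual YOLO annotation format (one text line per box, coordinates normalized to the image size); the index order and the helper name `encode_box` are assumptions, not from the patent.

```python
# Hypothetical label map for the six classes named in claim 4; the
# index order is an assumption (the patent does not specify one).
CLASSES = ["safetyCloth", "No_safetyCloth", "Green", "White", "Orange", "SC_Unzip"]
CLASS_TO_ID = {name: i for i, name in enumerate(CLASSES)}

def encode_box(name, x_center, y_center, w, h):
    """One line of a YOLO-format .txt annotation: class id followed by
    the box center and size, all normalized to [0, 1]."""
    return f"{CLASS_TO_ID[name]} {x_center:.6f} {y_center:.6f} {w:.6f} {h:.6f}"
```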
5. The method according to any one of claims 1 to 4, wherein a Random Erasing data-enhancement mode is used in the Input part to randomly occlude the picture frame: random occlusion blocks generated within the target object region occlude the target object in the image to varying degrees, but never completely.
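The partial-but-never-total occlusion of claim 5 can be sketched as below, loosely following Zhong et al.'s Random Erasing formulation: the erased rectangle's area is capped at 30% of the image, so the target is never fully covered. All parameter defaults here are assumptions.

```python
import numpy as np

def random_erase(img, rng, scale=(0.02, 0.3), ratio=(0.3, 3.3)):
    """Erase one random rectangle, filling it with random noise; the
    rectangle covers at most 30% of the image, so occlusion is partial."""
    h, w = img.shape[:2]
    area = h * w
    for _ in range(10):                              # retry until a box fits
        target_area = rng.uniform(*scale) * area
        aspect = rng.uniform(*ratio)
        eh = int(round((target_area * aspect) ** 0.5))
        ew = int(round((target_area / aspect) ** 0.5))
        if 0 < eh < h and 0 < ew < w:
            y = rng.integers(0, h - eh)
            x = rng.integers(0, w - ew)
            out = img.copy()
            noise_shape = (eh, ew) + img.shape[2:]
            out[y:y + eh, x:x + ew] = rng.integers(0, 256, noise_shape, dtype=img.dtype)
            return out
    return img.copy()                                # no valid box: unchanged
```

Passing the `rng` (`numpy.random.Generator`) explicitly keeps the augmentation reproducible across training runs.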
6. The method according to any one of claims 1 to 4, wherein a GridMask data-enhancement mode is used in the Input part to randomly occlude the picture frame: a mask with the same resolution as the original image is generated and multiplied with the original image to obtain the processed image.
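The GridMask step of claim 6, generating a mask at the original image's resolution and multiplying it in, can be sketched as follows. The cell period `d` and `keep_ratio` are hypothetical defaults; in training the grid `offset` would be randomized per image.

```python
import numpy as np

def grid_mask(img, d=16, keep_ratio=0.5, offset=(0, 0)):
    """Build a binary grid mask at the image's resolution and multiply it
    into the image; each d x d cell keeps a keep_ratio*d band and drops
    the remaining square."""
    h, w = img.shape[:2]
    keep = int(round(d * keep_ratio))      # side of the kept band per cell
    ys = (np.arange(h) + offset[0]) % d
    xs = (np.arange(w) + offset[1]) % d
    # A pixel is dropped when both in-cell coordinates fall past keep.
    drop = (ys[:, None] >= keep) & (xs[None, :] >= keep)
    mask = (~drop).astype(img.dtype)       # 1 = keep, 0 = drop
    if img.ndim == 3:
        mask = mask[:, :, None]            # broadcast over channels
    return img * mask
```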
7. A safety clothing detection system based on improved YOLO V5, characterized by comprising: a computer-readable storage medium and a processor;
the computer-readable storage medium is used for storing executable instructions;
the processor is used for reading the executable instructions stored in the computer-readable storage medium and executing the improved-YOLO-V5-based safety clothing detection method of any one of claims 1 to 6.
CN202110871211.2A 2021-07-30 2021-07-30 Safety clothing detection method and system based on improved YOLO V5 Active CN113553979B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110871211.2A CN113553979B (en) 2021-07-30 2021-07-30 Safety clothing detection method and system based on improved YOLO V5


Publications (2)

Publication Number Publication Date
CN113553979A true CN113553979A (en) 2021-10-26
CN113553979B CN113553979B (en) 2023-08-08

Family

ID=78104987

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110871211.2A Active CN113553979B (en) 2021-07-30 2021-07-30 Safety clothing detection method and system based on improved YOLO V5

Country Status (1)

Country Link
CN (1) CN113553979B (en)


Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103810711A (en) * 2014-03-03 2014-05-21 郑州日兴电子科技有限公司 Keyframe extracting method and system for monitoring system videos
CN106446926A (en) * 2016-07-12 2017-02-22 重庆大学 Transformer station worker helmet wear detection method based on video analysis
WO2018130016A1 (en) * 2017-01-10 2018-07-19 哈尔滨工业大学深圳研究生院 Parking detection method and device based on monitoring video
CN109447168A (en) * 2018-11-05 2019-03-08 江苏德劭信息科技有限公司 A kind of safety cap wearing detection method detected based on depth characteristic and video object
CN109543542A (en) * 2018-10-24 2019-03-29 杭州叙简科技股份有限公司 A kind of determination method whether particular place personnel dressing standardizes
CN109635697A (en) * 2018-12-04 2019-04-16 国网浙江省电力有限公司电力科学研究院 Electric operating personnel safety dressing detection method based on YOLOv3 target detection
CN110070033A (en) * 2019-04-19 2019-07-30 山东大学 Safety cap wearing state detection method in a kind of power domain dangerous work region
CN110852283A (en) * 2019-11-14 2020-02-28 南京工程学院 Helmet wearing detection and tracking method based on improved YOLOv3
AU2020100705A4 (en) * 2020-05-05 2020-06-18 Chang, Jiaying Miss A helmet detection method with lightweight backbone based on yolov3 network
CN111881730A (en) * 2020-06-16 2020-11-03 北京华电天仁电力控制技术有限公司 Wearing detection method for on-site safety helmet of thermal power plant
CN112287899A (en) * 2020-11-26 2021-01-29 山东捷讯通信技术有限公司 Unmanned aerial vehicle aerial image river drain detection method and system based on YOLO V5
CN112434672A (en) * 2020-12-18 2021-03-02 天津大学 Offshore human body target detection method based on improved YOLOv3
CN112541393A (en) * 2020-11-10 2021-03-23 国网浙江嵊州市供电有限公司 Transformer substation personnel detection method and device based on deep learning
KR20210042275A (en) * 2020-05-27 2021-04-19 베이징 바이두 넷컴 사이언스 앤 테크놀로지 코., 엘티디. A method and a device for detecting small target
CN112949633A (en) * 2021-03-05 2021-06-11 中国科学院光电技术研究所 Improved YOLOv 3-based infrared target detection method
CN113011319A (en) * 2021-03-16 2021-06-22 上海应用技术大学 Multi-scale fire target identification method and system


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
ZIJIAN WANG et al.: "Fast Personal Protective Equipment Detection for Real Construction Sites Using Deep Learning Approaches", Sensors, 17 May 2021 (2021-05-17), pages 1-22 *
LIU Xiwen et al.: "Helmet recognition based on intelligent video image analysis", Computer Engineering and Design, vol. 41, no. 5, pages 1464-1471 *
ZHOU Bing et al.: "A new key-frame extraction method for surveillance video content retrieval", Journal of Zhengzhou University (Engineering Science), vol. 34, no. 3, 31 May 2013 (2013-05-31), pages 102-105 *
等夏的初: "YOLOv5 from getting started to deployment: training private data and modifying the model", https://zhuanlan.zhihu.com/p/336871244, 14 December 2020 (2020-12-14), pages 1-10 *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114663407A (en) * 2022-03-29 2022-06-24 天地(常州)自动化股份有限公司 Coal gangue target detection method based on improved YOLOv5s model
CN114783000A (en) * 2022-06-15 2022-07-22 成都东方天呈智能科技有限公司 Method and device for detecting dressing standard of worker in bright kitchen range scene
CN114783000B (en) * 2022-06-15 2022-10-18 成都东方天呈智能科技有限公司 Method and device for detecting dressing standard of worker in bright kitchen range scene
CN115311261A (en) * 2022-10-08 2022-11-08 石家庄铁道大学 Method and system for detecting abnormality of cotter pin of suspension device of high-speed railway contact network
CN115330759A (en) * 2022-10-12 2022-11-11 浙江霖研精密科技有限公司 Method and device for calculating distance loss based on Hausdorff distance
CN115330759B (en) * 2022-10-12 2023-03-10 浙江霖研精密科技有限公司 Method and device for calculating distance loss based on Hausdorff distance
WO2024120245A1 (en) * 2022-12-06 2024-06-13 天翼数字生活科技有限公司 Video information summary generation method and apparatus, storage medium, and computer device
CN116681660A (en) * 2023-05-18 2023-09-01 中国长江三峡集团有限公司 Target object defect detection method and device, electronic equipment and storage medium
CN116681660B (en) * 2023-05-18 2024-04-19 中国长江三峡集团有限公司 Target object defect detection method and device, electronic equipment and storage medium
CN117422696A (en) * 2023-11-08 2024-01-19 河北工程大学 Belt wear state detection method based on improved YOLOv8-Efficient Net

Also Published As

Publication number Publication date
CN113553979B (en) 2023-08-08

Similar Documents

Publication Publication Date Title
CN113553979B (en) Safety clothing detection method and system based on improved YOLO V5
CN110070033B (en) Method for detecting wearing state of safety helmet in dangerous working area in power field
CN110502965B (en) Construction safety helmet wearing monitoring method based on computer vision human body posture estimation
CN107134144B (en) A kind of vehicle checking method for traffic monitoring
CN106709436B (en) Track traffic panoramic monitoring-oriented cross-camera suspicious pedestrian target tracking system
CN113553977B (en) Improved YOLO V5-based safety helmet detection method and system
CN104951773A (en) Real-time face recognizing and monitoring system
CN105095866A (en) Rapid behavior identification method and system
CN107256386A (en) Human behavior analysis method based on deep learning
CN113158850B (en) Ship driver fatigue detection method and system based on deep learning
CN112819068B (en) Ship operation violation behavior real-time detection method based on deep learning
CN113516076A (en) Improved lightweight YOLO v4 safety protection detection method based on attention mechanism
CN110457984A (en) Pedestrian's attribute recognition approach under monitoring scene based on ResNet-50
CN112541393A (en) Transformer substation personnel detection method and device based on deep learning
CN110008793A (en) Face identification method, device and equipment
Voulodimos et al. A threefold dataset for activity and workflow recognition in complex industrial environments
CN111753805A (en) Method and device for detecting wearing of safety helmet
CN109657715A (en) A kind of semantic segmentation method, apparatus, equipment and medium
CN115035088A (en) Helmet wearing detection method based on yolov5 and posture estimation
Liang et al. Methods of moving target detection and behavior recognition in intelligent vision monitoring.
CN113128476A (en) Low-power consumption real-time helmet detection method based on computer vision target detection
CN116259002A (en) Human body dangerous behavior analysis method based on video
CN112446417B (en) Spindle-shaped fruit image segmentation method and system based on multilayer superpixel segmentation
CN112487926A (en) Scenic spot feeding behavior identification method based on space-time diagram convolutional network
Wen et al. Improved helmet wearing detection method based on YOLOv3

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant