CN112926405A - Method, system, equipment and storage medium for detecting wearing of safety helmet - Google Patents

Method, system, equipment and storage medium for detecting wearing of safety helmet

Info

Publication number
CN112926405A
CN112926405A
Authority
CN
China
Prior art keywords
safety helmet
training
detection
training set
wearing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110137077.3A
Other languages
Chinese (zh)
Other versions
CN112926405B (en)
Inventor
张翔
赵妍祯
叶娜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian University of Architecture and Technology
Original Assignee
Xian University of Architecture and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian University of Architecture and Technology filed Critical Xian University of Architecture and Technology
Priority to CN202110137077.3A priority Critical patent/CN112926405B/en
Publication of CN112926405A publication Critical patent/CN112926405A/en
Application granted granted Critical
Publication of CN112926405B publication Critical patent/CN112926405B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103: Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/21: Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214: Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/23: Clustering techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00: Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a method, a system, a device and a storage medium for detecting the wearing of safety helmets, comprising the following steps: 1. acquiring original image data and taking part of it as a training set; 2. building the YOLOv4 safety helmet detection network; 3. obtaining the prior box sizes of the training set with a clustering algorithm and replacing the prior box data in YOLOv4; 4. inputting the training set into the YOLOv4 safety helmet detection network and training it by transfer learning to obtain a safety helmet identification model; 5. detecting whether on-site personnel wear safety helmets with the safety helmet identification model. The invention strengthens the discriminative power of the network, improves the accuracy of small-target detection under the varied behaviors of complex construction scenes as well as the robustness of the model, and achieves accurate detection of helmet wearing in complex scenes.

Description

Method, system, equipment and storage medium for detecting wearing of safety helmet
Technical Field
The invention belongs to the field of small target detection, and relates to a method, a system, equipment and a storage medium for detecting wearing of a safety helmet.
Background
Safety helmet wearing detection uses a deep-learning-based object detection algorithm to judge whether the construction workers in an image are wearing their helmets properly. Conventional object detection methods are mainly divided into five steps: image preprocessing, target region selection, feature extraction, feature selection and feature classification. Object detection based on manually extracted image features currently gives unsatisfactory results, whereas object detection based on deep learning has matured and is applied in many fields.
Deep learning methods in the field of object detection fall into two main categories: two-stage detection algorithms and one-stage detection algorithms. Two-stage algorithms are image classification algorithms based on candidate regions: a region-search algorithm first extracts a number of candidate regions from the image, and the candidate regions are then classified after feature extraction. One-stage algorithms regress the class probabilities and position coordinates of objects directly, without an RPN, so their detection speed is higher than that of two-stage detection networks. The existing YOLO algorithm can recognize clear targets against a clean background well, but it cannot accurately handle the conditions found on building construction sites, where the environment is complex, the workers' actions vary widely, targets overlap, and the background color easily clashes with the color of the target object.
Disclosure of Invention
The invention aims to overcome the above defects of the prior art and provides a method, a system, a device and a storage medium for detecting the wearing of safety helmets, so as to detect multiple small targets accurately in complex scenes.
To achieve this purpose, the invention adopts the following technical scheme:
a method for detecting wearing of a safety helmet comprises the following steps;
acquiring original image data, and taking part of the original image data as a training set;
step two, building a safety helmet detection network YOLOv 4;
step three, obtaining the size of a prior frame of a training set by using a clustering algorithm, and replacing the prior frame data in YOLOv 4;
uploading the training set to a safety helmet detection network YOLOv4, and training by adopting a transfer learning method to obtain a safety helmet identification model;
and step five, detecting whether the field personnel wear the safety helmet or not by using the safety helmet identification model.
Preferably, in step one the remaining original images are used as a test set; and in step six, after the safety helmet identification model has been obtained, the test set is input and used to test the model, yielding a test result set.
Preferably, in step one the training set is annotated with xml labels, which are divided into positive samples and negative samples.
Preferably, in step two the YOLOv4 safety helmet detection network comprises an Input layer, a Backbone network, a Neck module and a Head detection head.
Preferably, in step four the weights file of the YOLOv4 safety helmet detection network is trained from a pre-trained weight file by transfer learning, the weights file is converted into a .conv.23 file, and the generated .conv.23 file is used as the pre-trained model for the next round of training.
Further, weight files and plots of the loss and mAP curves are generated during training; the weight files are .weights files.
A safety helmet wearing detection system comprising:
a data acquisition module for acquiring original image data and taking part of it as a training set;
a safety helmet detection network construction module for building the YOLOv4 safety helmet detection network;
a prior box updating module for obtaining the prior box sizes of the training set with a clustering algorithm and replacing the prior box data in YOLOv4;
a training module for inputting the training set into the YOLOv4 safety helmet detection network and training it by transfer learning to obtain a safety helmet identification model;
and a detection module for detecting whether on-site personnel wear safety helmets with the safety helmet identification model.
A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, the processor implementing the steps of the safety helmet wearing detection method described above when executing the computer program.
A computer-readable storage medium storing a computer program which, when executed by a processor, carries out the steps of the safety helmet wearing detection method described above.
Compared with the prior art, the invention has the following beneficial effects:
according to the method, the size of the prior frame of the training set is obtained by using a clustering algorithm, the prior frame data in the YOLOv4 is replaced, the prior frame size in the YOLOv4 configuration file is based on the coco data set and is not suitable for the VOC data set used by the method, the small data points can be classified by using a merging clustering algorithm, and the detection rate can be improved when the obtained prior frame is used for small target detection. According to the invention, a better detection data set and a network are obtained based on the method, the identification degree of the network is enhanced, the accuracy of multi-behavior small target detection in a complex construction scene and the robustness of a model are improved, and the accurate detection of the wearing of the safety helmet in the complex scene is realized.
Drawings
FIG. 1 is a flow chart of the present invention;
FIG. 2 is a block diagram of the helmet detection network YOLOv4 of the present invention;
FIG. 3 shows the loss and mAP curves obtained when training on the public Safety-Helmet-Wearing-Dataset;
FIG. 4 shows the loss and mAP curves obtained when training on the dataset of the present invention.
Detailed Description
The invention is described in further detail below with reference to the accompanying drawings:
the wearing detection method of the safety helmet disclosed by the invention comprises the following steps as shown in figure 1:
step 1: a data set required for training is prepared. The method comprises the steps of collecting original image data on a building construction site, dividing collected photos into a training set and a testing set, wherein the training set accounts for 75% of photos, and the testing set accounts for 25% of photos, and preprocessing source images in the training set.
Specifically: at least 6000 relevant helmet-wearing pictures are collected. The source images must cover multiple angles and multiple postures of the construction workers; the workers appear at different distances and sizes and include many small targets, the environment must not be uniform, and the site must not be too simple. The image set used in the invention comes partly from the Safety-Helmet-Wearing-Dataset published on GitHub, partly from photos of building construction sites taken with a mobile phone (location: Xi'an), and partly from a web crawler. The collected photos are screened strictly, and the source photo set is enlarged with the ImageDataGenerator picture-preprocessing utility provided by Keras: the picture generator produces data batch by batch, supports real-time data augmentation, and keeps generating data during training until the set number of epochs is reached. When the dataset is insufficient, ImageDataGenerator() can be used to enlarge it and to prevent the constructed network from overfitting. The preprocessed pictures are stored in the JPEGImages folder under a unified .jpg naming format and serve as the training set, which accounts for 75% of all data.
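As an illustration of this augmentation step, the following is a minimal sketch using Keras' ImageDataGenerator. The folder names, augmentation parameters and batch counts are assumptions for the example, not values taken from the patent:

```python
# Hedged sketch: enlarge the helmet photo set with Keras' ImageDataGenerator.
# All paths and parameter values below are illustrative assumptions.
from tensorflow.keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(
    rotation_range=15,            # small random rotations
    width_shift_range=0.1,        # horizontal shifts
    height_shift_range=0.1,       # vertical shifts
    zoom_range=0.1,
    horizontal_flip=True,
    brightness_range=(0.7, 1.3),
)

# flow_from_directory expects at least one sub-folder of images under "raw_photos";
# augmented copies are written into JPEGImages as .jpg files.
gen = datagen.flow_from_directory(
    "raw_photos",
    target_size=(608, 608),
    batch_size=32,
    class_mode=None,
    save_to_dir="VOCdevkit/VOC2007/JPEGImages",
    save_format="jpg",
)
for i, _ in enumerate(gen):
    if i >= 100:                  # stop after enough augmented batches
        break
```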
Step 2: make the VOC dataset. The training set is annotated with the LabelImg tool; the labels are divided into the positive sample helmet and the negative sample no-helmet. The generated xml labels are stored uniformly in the Annotations folder under the VOC folder, and the source pictures are stored in the JPEGImages folder.
The VOC dataset comprises: 1) the JPEGImages folder storing the safety helmet pictures from step one; 2) the Annotations folder: every target object in each picture is marked with the LabelImg tool, a bounding box is drawn around it and the object in the box is labeled, the labels being divided into positive and negative samples; the generated xml files are stored in the Annotations folder with file names identical to the picture names; 3) the ImageSets folder containing a Main folder, which holds test.txt, train.txt, trainval.txt and val.txt; these files store the index of each picture and point training and testing to the picture paths. The VOC dataset is now ready, but a VOC-format dataset alone does not satisfy YOLO's requirements on the dataset, so voc_label.py is used to generate the corresponding labels folder and the image-path files; the classes entry in that file is changed to helmet and no-helmet.
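The conversion performed by voc_label.py can be sketched as follows. This is a hedged re-implementation for illustration only; the VOCdevkit paths and the helper name follow the usual conventions and are assumptions, not quoted from the patent:

```python
# Hedged sketch of the VOC-to-YOLO label conversion (what voc_label.py does):
# each Pascal-VOC xml file becomes a txt file with one line per object:
# class_id x_center y_center width height (all normalized to [0, 1]).
import xml.etree.ElementTree as ET

CLASSES = ["helmet", "no-helmet"]

def convert_annotation(xml_path, label_path):
    root = ET.parse(xml_path).getroot()
    img_w = float(root.find("size/width").text)
    img_h = float(root.find("size/height").text)
    lines = []
    for obj in root.iter("object"):
        name = obj.find("name").text
        if name not in CLASSES:
            continue
        box = obj.find("bndbox")
        xmin, ymin = float(box.find("xmin").text), float(box.find("ymin").text)
        xmax, ymax = float(box.find("xmax").text), float(box.find("ymax").text)
        cx, cy = (xmin + xmax) / 2 / img_w, (ymin + ymax) / 2 / img_h
        bw, bh = (xmax - xmin) / img_w, (ymax - ymin) / img_h
        lines.append(f"{CLASSES.index(name)} {cx:.6f} {cy:.6f} {bw:.6f} {bh:.6f}")
    with open(label_path, "w") as f:
        f.write("\n".join(lines))

# Convert every image listed in the train index file (assumed VOC layout).
for image_id in open("VOCdevkit/VOC2007/ImageSets/Main/train.txt"):
    image_id = image_id.strip()
    convert_annotation(
        f"VOCdevkit/VOC2007/Annotations/{image_id}.xml",
        f"VOCdevkit/VOC2007/labels/{image_id}.txt",
    )
```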
Step 3: build the YOLOv4 safety helmet detection network and compile it in cmake mode. As shown in FIG. 2, YOLOv4 contains the following four parts: an Input layer, a Backbone network, a Neck and a Head detection head. The Backbone network comprises CSPDarknet53 with the Mish activation function and DropBlock, and contains 72 convolutional layers; the generated Backbone structure includes 5 CSP modules, and the convolution kernel in front of each CSP module is 3 x 3 with a stride of 2. Because the Backbone has 5 CSP modules and the input image is 608 x 608, the feature map sizes change according to the rule 608 -> 304 -> 152 -> 76 -> 38 -> 19, and a feature map of size 19 x 19 is obtained after the 5 CSP modules.
The specific process of building the safety helmet detection network YOLOv4 is as follows:
1) Construct the YOLOv4 backbone network CSPDarknet53, as shown in FIG. 2. It comprises a CBM block and 5 CSPn residual modules: CSP1 is a combination of 5 CBM blocks and 1 residual unit, CSP2 of 5 CBM blocks and 2 residual units, each of the two CSP8 modules of 5 CBM blocks and 8 residual units, and CSP4 of 5 CBM blocks and 4 residual units; one downsampling step precedes each CSPn. Each CBM consists of a convolutional layer, BN batch normalization and a Mish activation layer. The Mish activation function is expressed mathematically as:
Mish = x · tanh(ln(1 + e^x))
where, tanh is also an activation function, and the mathematical expression of tanh is:
tanh(x) = (e^x - e^(-x)) / (e^x + e^(-x))
where x denotes the weighted sum of the nonlinear features, ln() is the logarithm with base e, and e^x is the exponential function.
The Mish activation function allows a relatively small negative gradient to flow in, which keeps information flowing. The activation function has no upper bound, so the saturation problem is largely avoided; Mish is smooth at every point, and its gradient-descent behavior is better than that of ReLU.
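As a purely numerical illustration of the formula above (the network itself implements Mish inside the darknet framework), a small sketch:

```python
# Hedged sketch: evaluate Mish(x) = x * tanh(ln(1 + exp(x))) with NumPy.
import numpy as np

def mish(x):
    # ln(1 + exp(x)) is the softplus function; log1p keeps it numerically stable
    return x * np.tanh(np.log1p(np.exp(x)))

x = np.array([-3.0, -1.0, 0.0, 1.0, 3.0])
print(mish(x))   # small negative inputs give small negative outputs, unlike ReLU
```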
2) Construct the Neck module. The Neck module comprises SPP + PANet. For a 608 x 608 input, the SPP is built from pooling windows of 1, 5, 9 and 13 followed by a Concat connection; adding the SPP after CSPDarknet53 significantly enlarges the receptive field and isolates the most important contextual features, while reducing the network's operating speed very little. PANet replaces FPN as the parameter-aggregation method of the different detector layers for different backbone levels, adding a bottom-up path on the basis of FPN.
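A minimal sketch of such an SPP block is given below, written in PyTorch purely for illustration (the patent's network is built in the darknet framework, so the library choice and module layout here are assumptions):

```python
# Hedged sketch of the SPP block: parallel max pooling with 1x1 (identity),
# 5x5, 9x9 and 13x13 windows at stride 1, padded so the spatial size is kept,
# then concatenated along the channel axis.
import torch
import torch.nn as nn

class SPP(nn.Module):
    def __init__(self, kernel_sizes=(5, 9, 13)):
        super().__init__()
        self.pools = nn.ModuleList(
            nn.MaxPool2d(kernel_size=k, stride=1, padding=k // 2)
            for k in kernel_sizes
        )

    def forward(self, x):
        # the 1x1 branch is simply the identity
        return torch.cat([x] + [pool(x) for pool in self.pools], dim=1)

# Example: a 19x19 feature map with 512 channels becomes 19x19 with 2048 channels.
feat = torch.randn(1, 512, 19, 19)
print(SPP()(feat).shape)   # torch.Size([1, 2048, 19, 19])
```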
3) Construct the Head detection head. After the 5 CSP modules (608 -> 304 -> 152 -> 76 -> 38 -> 19), a feature map of size 19 x 19 is obtained.
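To make the head's output concrete for the two-class helmet task, the following short sketch spells out the usual YOLOv4 arithmetic (three anchors per scale, three detection scales for a 608 x 608 input); it illustrates the convention and is not code from the patent:

```python
# Hedged sketch: output channels and grid sizes of the YOLOv4 detection head
# for a 608x608 input and 2 classes (helmet, no-helmet).
num_classes = 2
anchors_per_scale = 3
channels = anchors_per_scale * (5 + num_classes)   # 4 box offsets + 1 objectness + classes
print(channels)                                     # 21, i.e. the filters value used later

for stride in (8, 16, 32):                          # the three detection scales
    grid = 608 // stride
    print(f"stride {stride}: {grid}x{grid} grid, {channels} channels per cell")
```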
Step 4: process the xml files in the Annotations folder of the VOC dataset with the clustering algorithm to obtain the prior box sizes of the training set, and replace the prior box sizes in YOLOv4 with the prior box sizes of the training set.
This clustering algorithm is a greedy algorithm whose idea is similar to the Kruskal algorithm that computes the minimum spanning tree in classical graph algorithms. If m data samples need to be aggregated into k classes, each data sample is first placed in its own class, and the two classes with the shortest distance are merged at each step until the m data samples have been aggregated into k classes. The prior box sizes of the xml file set are obtained with this merge (agglomerative) clustering algorithm, and the data are written into the yolo configuration file.
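A hedged sketch of this anchor-generation step is shown below. It reads the box widths and heights from the VOC xml files, merges them into 9 clusters and prints cluster-mean sizes that can be pasted into the cfg; the use of scikit-learn's agglomerative clustering, the paths and the choice of 9 anchors are assumptions for the example:

```python
# Hedged sketch: derive prior box (anchor) sizes by merge/agglomerative clustering.
import glob
import numpy as np
import xml.etree.ElementTree as ET
from sklearn.cluster import AgglomerativeClustering

def load_box_sizes(ann_dir, input_size=608):
    """Collect (width, height) of every labeled box, scaled to the network input size."""
    sizes = []
    for xml_file in glob.glob(f"{ann_dir}/*.xml"):
        root = ET.parse(xml_file).getroot()
        img_w = float(root.find("size/width").text)
        img_h = float(root.find("size/height").text)
        for box in root.iter("bndbox"):
            w = (float(box.find("xmax").text) - float(box.find("xmin").text)) / img_w
            h = (float(box.find("ymax").text) - float(box.find("ymin").text)) / img_h
            sizes.append((w * input_size, h * input_size))
    return np.array(sizes)

boxes = load_box_sizes("VOCdevkit/VOC2007/Annotations")
labels = AgglomerativeClustering(n_clusters=9).fit_predict(boxes)
anchors = np.array([boxes[labels == k].mean(axis=0) for k in range(9)])
anchors = anchors[np.argsort(anchors.prod(axis=1))]           # sort small to large
print(", ".join(f"{w:.0f},{h:.0f}" for w, h in anchors))      # paste into the cfg anchors line
```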
Step 5: train the YOLOv4 safety helmet detection network by transfer learning from the CSPDarknet object-recognition model. The number of iterations is set to 4000 and the network is trained with different mini-batch sizes, yielding the trained .weights recognition model and the loss and mAP plots; the .weights recognition model is the safety helmet identification model.
The training environment of the YOLOv4 safety helmet detection network includes CUDA 10.0, cuDNN v7, Python 3.6, Visual Studio 2017, cmake and other tools. Runtime environment: Win10 operating system, CPU Intel(R) Core(TM) i7-9700F, GPU RTX 2080Ti, video memory 11 GB.
The pre-trained weight file yolov4.conv.137 is placed in the root directory of the framework.
The relevant parameters in the cfg configuration file are updated, including: the path information of the test set and training set; the input picture size (width = 608 (or 416), height = 608 (or 416)); mini-batch = 1 or 4; max_batches = 4000 (at least 2000 per class); steps = 3200, 3600 (max_batches x 0.8 and max_batches x 0.9); classes = 2; filters = 21 ((classes + 5) x 3); and the anchors, which are the prior box data generated by merge clustering.
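The arithmetic behind these values can be spelled out in a short sketch; the numbers correspond to the two-class helmet/no-helmet model, and the cfg file itself is still edited by hand:

```python
# Hedged sketch: cfg values for a 2-class model, following the usual darknet convention.
classes = 2
max_batches = 4000
cfg_values = {
    "width": 608,                                   # or 416
    "height": 608,                                  # or 416
    "max_batches": max_batches,
    "steps": (int(max_batches * 0.8), int(max_batches * 0.9)),   # 3200, 3600
    "classes": classes,
    "filters": (classes + 5) * 3,                   # 21, before each [yolo] layer
}
for key, value in cfg_values.items():
    print(f"{key} = {value}")
```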
The constructed dataset is then input, and mosaic data processing is adopted so that 4 pictures are fed into the network at a time.
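The idea of mosaic processing can be illustrated with the following hedged sketch, which stitches four pictures into one composite image; darknet performs this (together with the matching label transformation, which is omitted here) internally, so the code below is only an illustration with assumed file names:

```python
# Hedged sketch of mosaic data processing: four images resized and tiled
# around a random centre point into one training picture.
import random
import cv2
import numpy as np

def mosaic(image_paths, out_size=608):
    assert len(image_paths) == 4
    canvas = np.zeros((out_size, out_size, 3), dtype=np.uint8)
    cx = random.randint(int(0.3 * out_size), int(0.7 * out_size))   # random split point
    cy = random.randint(int(0.3 * out_size), int(0.7 * out_size))
    regions = [                                      # the four quadrants (x0, y0, x1, y1)
        (0, 0, cx, cy), (cx, 0, out_size, cy),
        (0, cy, cx, out_size), (cx, cy, out_size, out_size),
    ]
    for path, (x0, y0, x1, y1) in zip(image_paths, regions):
        img = cv2.imread(path)
        canvas[y0:y1, x0:x1] = cv2.resize(img, (x1 - x0, y1 - y0))
    return canvas

# Example call with four hypothetical training pictures:
# tile = mosaic(["a.jpg", "b.jpg", "c.jpg", "d.jpg"])
```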
The weights file of the network is trained from the pre-trained weight file by transfer learning, the weights file is converted into a .conv.23 file, and the newly generated file is used as the pre-trained model for the next round of training. The advantages are a fast training speed and an excellent model.
Weight files and plots of the loss and mAP curves are generated during training.
Step 6: input the test picture set of safety helmets, test the prepared test set with the trained .weights recognition model, screen the weights folder for the better weight file, and obtain the test result set Output of about 2000 pictures.
Specifically, the original YOLOv4 framework only supports testing a single picture at a time; testing a large number of pictures one by one is very tedious. To detect pictures in batch and store them in a designated folder, a GetFilename function is added at the beginning of detector.c and the framework is re-compiled with make. The test set is input into the network for detection and marking, and the labeled image set Output is produced.
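For readers who prefer not to patch the C source, an alternative batch test can be sketched with OpenCV's darknet loader; this is not the procedure used in the patent, and the cfg, weights and folder names below are assumptions:

```python
# Hedged sketch: batch detection over a folder of test pictures with OpenCV's
# dnn module, drawing the predicted boxes and saving the results to "Output".
import glob
import os
import cv2

net = cv2.dnn.readNetFromDarknet("yolov4-helmet.cfg", "yolov4-helmet_best.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

CLASSES = ["helmet", "no-helmet"]
os.makedirs("Output", exist_ok=True)

for path in glob.glob("test_images/*.jpg"):
    img = cv2.imread(path)
    class_ids, scores, boxes = model.detect(img, confThreshold=0.25, nmsThreshold=0.45)
    for class_id, score, box in zip(class_ids, scores, boxes):
        x, y, w, h = box
        cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)
        cv2.putText(img, f"{CLASSES[int(class_id)]} {float(score):.2f}",
                    (x, y - 5), cv2.FONT_HERSHEY_SIMPLEX, 0.5, (0, 255, 0), 1)
    cv2.imwrite(os.path.join("Output", os.path.basename(path)), img)
```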
Step 7: detect whether on-site personnel wear safety helmets with the safety helmet identification model.
FIG. 3 shows the mAP and loss curves obtained by training on the public dataset. The loss on that dataset clearly stays above 5 (it generally needs to fall below 2), so the resulting model has an obviously low recognition rate during detection and relatively poor robustness.
FIG. 4 shows the mAP and loss curves obtained by training the model of the invention on the new dataset. The loss falls below 1, the robustness of the model is improved, and the recognition of small targets and of workers with complex behaviors during detection is good.
As can be seen from Table 1, after merge clustering is combined with YOLOv4, the detection accuracy (AP) of the weight files trained under different mini-batch values and input sizes is clearly improved; the loss decreases by about 4 compared with that obtained on the original dataset, and the robustness is better.
TABLE 1 comparison of training results for different mini _ batch
The above-mentioned contents are only for illustrating the technical idea of the present invention, and the protection scope of the present invention is not limited thereby, and any modification made on the basis of the technical idea of the present invention falls within the protection scope of the claims of the present invention.

Claims (9)

1. A method for detecting the wearing of a safety helmet, characterized by comprising the following steps:
step one, acquiring original image data and taking part of it as a training set;
step two, building the YOLOv4 safety helmet detection network;
step three, obtaining the prior box sizes of the training set with a clustering algorithm and replacing the prior box data in YOLOv4;
step four, inputting the training set into the YOLOv4 safety helmet detection network and training it by transfer learning to obtain a safety helmet identification model;
step five, detecting whether on-site personnel wear safety helmets with the safety helmet identification model.
2. The method for detecting the wearing of a safety helmet according to claim 1, wherein in step one the remaining original images are used as a test set; and in step six, after the safety helmet identification model has been obtained, the test set is input and used to test the model, yielding a test result set.
3. The method for detecting the wearing of a safety helmet according to claim 1, wherein in step one the training set is annotated with xml labels, which are divided into positive samples and negative samples.
4. The method for detecting the wearing of a safety helmet according to claim 1, wherein in step two the YOLOv4 safety helmet detection network comprises an Input layer, a Backbone network, a Neck module and a Head detection head.
5. The method for detecting the wearing of a safety helmet according to claim 1, wherein in step four the weights file of the YOLOv4 safety helmet detection network is trained from a pre-trained weight file by transfer learning, the weights file is converted into a .conv.23 file, and the generated .conv.23 file is used as the pre-trained model for the next round of training.
6. The method for detecting the wearing of a safety helmet according to claim 5, wherein weight files and plots of the loss and mAP curves are generated during training, the weight files being .weights files.
7. A safety helmet wearing detection system, characterized by comprising:
a data acquisition module for acquiring original image data and taking part of it as a training set;
a safety helmet detection network construction module for building the YOLOv4 safety helmet detection network;
a prior box updating module for obtaining the prior box sizes of the training set with a clustering algorithm and replacing the prior box data in YOLOv4;
a training module for inputting the training set into the YOLOv4 safety helmet detection network and training it by transfer learning to obtain a safety helmet identification model;
and a detection module for detecting whether on-site personnel wear safety helmets with the safety helmet identification model.
8. A computer device comprising a memory, a processor and a computer program stored in said memory and executable on said processor, characterized in that said processor implements the steps of the safety helmet wearing detection method according to any one of claims 1 to 6 when executing said computer program.
9. A computer-readable storage medium in which a computer program is stored, characterized in that the computer program, when executed by a processor, carries out the steps of the safety helmet wearing detection method according to any one of claims 1 to 6.
CN202110137077.3A 2021-02-01 2021-02-01 Method, system, equipment and storage medium for detecting wearing of safety helmet Active CN112926405B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110137077.3A CN112926405B (en) 2021-02-01 2021-02-01 Method, system, equipment and storage medium for detecting wearing of safety helmet

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110137077.3A CN112926405B (en) 2021-02-01 2021-02-01 Method, system, equipment and storage medium for detecting wearing of safety helmet

Publications (2)

Publication Number Publication Date
CN112926405A true CN112926405A (en) 2021-06-08
CN112926405B CN112926405B (en) 2024-04-02

Family

ID=76169324

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110137077.3A Active CN112926405B (en) 2021-02-01 2021-02-01 Method, system, equipment and storage medium for detecting wearing of safety helmet

Country Status (1)

Country Link
CN (1) CN112926405B (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408394A (en) * 2021-06-11 2021-09-17 通号智慧城市研究设计院有限公司 Safety helmet wearing detection method and system based on deep learning model
CN113408624A (en) * 2021-06-22 2021-09-17 福州大学 Task-oriented image quality evaluation method based on transfer learning
CN113469254A (en) * 2021-07-02 2021-10-01 上海应用技术大学 Target detection method and system based on target detection model
CN113554682A (en) * 2021-08-03 2021-10-26 同济大学 Safety helmet detection method based on target tracking
CN113553977A (en) * 2021-07-30 2021-10-26 国电汉川发电有限公司 Improved YOLO V5-based safety helmet detection method and system
CN113592002A (en) * 2021-08-04 2021-11-02 江苏网进科技股份有限公司 Real-time garbage monitoring method and system
CN113688709A (en) * 2021-08-17 2021-11-23 长江大学 Intelligent detection method, system, terminal and medium for wearing safety helmet
CN113850134A (en) * 2021-08-24 2021-12-28 中国船舶重工集团公司第七0九研究所 Safety helmet wearing detection method and system integrating attention mechanism
CN113936501A (en) * 2021-10-12 2022-01-14 青岛科技大学 Intelligent crossing traffic early warning system based on target detection
CN113989852A (en) * 2021-11-12 2022-01-28 重庆邮电大学 Light YOLOv4 construction site helmet wearing detection method and device
CN114022756A (en) * 2021-09-24 2022-02-08 惠州学院 Visual recognition method for garbage around drainage cover, electronic equipment and storage medium
CN114332773A (en) * 2022-01-05 2022-04-12 苏州麦科斯工程科技有限公司 Intelligent construction site safety helmet wearing identification control system based on Yolo v4 improved model
CN114511899A (en) * 2021-12-30 2022-05-17 武汉光庭信息技术股份有限公司 Street view video fuzzy processing method and system, electronic equipment and storage medium
CN114548223A (en) * 2022-01-18 2022-05-27 南京工程学院 Improved YOLOv4 network structure suitable for small target detection and application thereof
CN114595759A (en) * 2022-03-07 2022-06-07 卡奥斯工业智能研究院(青岛)有限公司 Protective tool identification method and device, electronic equipment and storage medium
CN115131339A (en) * 2022-07-25 2022-09-30 福建省海峡智汇科技有限公司 Factory tooling detection method and system based on neural network target detection

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AU2020100711A4 (en) * 2020-05-05 2020-06-11 Chang, Cheng Mr The retrieval system of wearing safety helmet based on deep learning
CN111931623A (en) * 2020-07-31 2020-11-13 南京工程学院 Face mask wearing detection method based on deep learning
CN112131983A (en) * 2020-09-11 2020-12-25 桂林理工大学 Helmet wearing detection method based on improved YOLOv3 network

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
He Kaihua: "Traffic sign recognition based on an object detection network", Software Engineering, no. 10 *
Shi Hui; Chen Xianqiao; Yang Ying: "Safety helmet wearing detection method based on improved YOLO v3", Computer Engineering and Applications, no. 11 *

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113408394A (en) * 2021-06-11 2021-09-17 通号智慧城市研究设计院有限公司 Safety helmet wearing detection method and system based on deep learning model
CN113408624A (en) * 2021-06-22 2021-09-17 福州大学 Task-oriented image quality evaluation method based on transfer learning
CN113469254A (en) * 2021-07-02 2021-10-01 上海应用技术大学 Target detection method and system based on target detection model
CN113469254B (en) * 2021-07-02 2024-04-16 上海应用技术大学 Target detection method and system based on target detection model
CN113553977A (en) * 2021-07-30 2021-10-26 国电汉川发电有限公司 Improved YOLO V5-based safety helmet detection method and system
CN113554682A (en) * 2021-08-03 2021-10-26 同济大学 Safety helmet detection method based on target tracking
CN113592002A (en) * 2021-08-04 2021-11-02 江苏网进科技股份有限公司 Real-time garbage monitoring method and system
CN113688709B (en) * 2021-08-17 2023-12-05 广东海洋大学 Intelligent detection method, system, terminal and medium for wearing safety helmet
CN113688709A (en) * 2021-08-17 2021-11-23 长江大学 Intelligent detection method, system, terminal and medium for wearing safety helmet
CN113850134A (en) * 2021-08-24 2021-12-28 中国船舶重工集团公司第七0九研究所 Safety helmet wearing detection method and system integrating attention mechanism
CN114022756A (en) * 2021-09-24 2022-02-08 惠州学院 Visual recognition method for garbage around drainage cover, electronic equipment and storage medium
CN113936501A (en) * 2021-10-12 2022-01-14 青岛科技大学 Intelligent crossing traffic early warning system based on target detection
CN113989852A (en) * 2021-11-12 2022-01-28 重庆邮电大学 Light YOLOv4 construction site helmet wearing detection method and device
CN113989852B (en) * 2021-11-12 2024-06-04 重庆邮电大学 Construction site safety helmet wearing detection method and device with lightweight YOLOv4
CN114511899A (en) * 2021-12-30 2022-05-17 武汉光庭信息技术股份有限公司 Street view video fuzzy processing method and system, electronic equipment and storage medium
CN114332773A (en) * 2022-01-05 2022-04-12 苏州麦科斯工程科技有限公司 Intelligent construction site safety helmet wearing identification control system based on Yolo v4 improved model
CN114548223A (en) * 2022-01-18 2022-05-27 南京工程学院 Improved YOLOv4 network structure suitable for small target detection and application thereof
CN114595759A (en) * 2022-03-07 2022-06-07 卡奥斯工业智能研究院(青岛)有限公司 Protective tool identification method and device, electronic equipment and storage medium
CN115131339A (en) * 2022-07-25 2022-09-30 福建省海峡智汇科技有限公司 Factory tooling detection method and system based on neural network target detection

Also Published As

Publication number Publication date
CN112926405B (en) 2024-04-02

Similar Documents

Publication Publication Date Title
CN112926405A (en) Method, system, equipment and storage medium for detecting wearing of safety helmet
CN106650740B (en) A kind of licence plate recognition method and terminal
CN108596046A (en) A kind of cell detection method of counting and system based on deep learning
CN107895160A (en) Human face detection and tracing device and method
CN105574550A (en) Vehicle identification method and device
CN110458107A (en) Method and apparatus for image recognition
CN109117836A (en) Text detection localization method and device under a kind of natural scene based on focal loss function
CN106355188A (en) Image detection method and device
CN108830332A (en) A kind of vision vehicle checking method and system
CN111368690A (en) Deep learning-based video image ship detection method and system under influence of sea waves
CN114495029B (en) Traffic target detection method and system based on improved YOLOv4
CN113129335B (en) Visual tracking algorithm and multi-template updating strategy based on twin network
CN111652835A (en) Method for detecting insulator loss of power transmission line based on deep learning and clustering
CN110688955A (en) Building construction target detection method based on YOLO neural network
CN105989341A (en) Character recognition method and device
CN117115722B (en) Construction scene detection method and device, storage medium and electronic equipment
CN117576038A (en) Fabric flaw detection method and system based on YOLOv8 network
CN114898290A (en) Real-time detection method and system for marine ship
CN105426926B (en) A kind of couple of AMOLED carries out the method and device of detection classification
CN114694130A (en) Method and device for detecting telegraph poles and pole numbers along railway based on deep learning
WO2024063905A1 (en) Few-shot classifier example extraction
CN112183287A (en) People counting method of mobile robot under complex background
Deng et al. Automatic estimation of rice grain number based on a convolutional neural network
CN116385957A (en) X-ray image contraband detection method, device, equipment and medium
Li et al. Research on common tree species recognition by faster r-cnn based on whole tree image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant