CN115410111A - Safety helmet detection method applied to edge equipment and based on structural parameterization - Google Patents
- Publication number
- CN115410111A CN115410111A CN202210843903.0A CN202210843903A CN115410111A CN 115410111 A CN115410111 A CN 115410111A CN 202210843903 A CN202210843903 A CN 202210843903A CN 115410111 A CN115410111 A CN 115410111A
- Authority
- CN
- China
- Prior art keywords
- convolution
- basic
- safety helmet
- rep
- parameterization
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/82—Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The invention relates to a safety helmet detection method based on structural re-parameterization, applied to edge devices, comprising the following steps: 1) constructing a dense re-parameterization module; 2) constructing a standard YOLOv3-tiny model and a training data set suitable for safety helmet detection; 3) reconstructing and training the standard YOLOv3-tiny model; 4) equivalently converting the trained reconstructed model into an inference model and performing safety helmet detection. Compared with the prior art, the method offers high real-time performance, high accuracy, and strong generalization capability; it avoids gradient vanishing and gradient explosion, reduces feature redundancy, and improves the learning capability of the network.
Description
Technical Field
The invention relates to the technical field of deep-neural-network structural re-parameterization and object detection, and in particular to a safety helmet detection method based on structural re-parameterization applied to edge devices.
Background
Industries such as engineering and construction are typically labor-intensive, with complex working environments and frequent safety accidents; death from brain trauma caused by falling objects is a common accident type in the construction industry. As an effective piece of protective equipment, the safety helmet absorbs the impact energy of falling objects and reduces head injury, and is therefore widely used on construction sites. In on-site safety management, real-time and accurate supervision of helmet-wearing behavior is an important link. Site safety management requires detection equipment that is small and highly mobile, so embedded edge computing devices are often used, yet existing helmet detection algorithms fall short in real-time performance and accuracy.
Much work at home and abroad has addressed automatic helmet recognition. Dalal et al. first proposed automatic helmet detection by extracting histogram-of-gradients features; Feng Jie et al. combined an AdaBoost classifier to detect the helmet position and judged whether a helmet was worn from the spatial relation between the person and the helmet; Hu Tian et al. proposed a neural-network helmet recognition model emphasizing the application of wavelet-transform analysis and deep learning; Liu Xiaohui et al. located the face by skin-color detection and then recognized the helmet with a support vector machine (SVM); Liu Yunbo et al. judged helmet wearing from the distribution of chromatic values in the upper third of a moving target. Although these methods achieve accurate recognition in specific scenes, they still suffer from demanding environmental requirements, poor real-time performance, weak generalization capability, and complex operation.
In recent years, with the deepening research and spreading application of deep learning, deep-network object detectors represented by the YOLOv3 (You Only Look Once v3) model have achieved good real-time performance and high accuracy. However, their complex and varied convolution structures fragment the network, increasing network complexity and feature redundancy while lowering memory-access efficiency and flexibility, which seriously hinders the deployment of YOLOv3-based helmet detection algorithms on edge devices with weak computing power and little memory.
Disclosure of Invention
The object of the invention is to overcome the defects of the prior art and provide a safety helmet detection method based on structural re-parameterization applied to edge devices.
The object of the invention can be achieved by the following technical scheme:
A safety helmet detection method based on structural re-parameterization applied to edge devices comprises the following steps:
1) Constructing a dense re-parameterization module;
2) Constructing a standard YOLOv3-tiny model and a training data set suitable for safety helmet detection;
3) Reconstructing and training the standard YOLOv3-tiny model;
4) Equivalently converting the trained reconstructed model into an inference model and performing safety helmet detection.
Step 1) specifically comprises the following steps:
11) Defining a basic unit;
12) Constructing a transformation structure;
13) Constructing a learning structure;
14) Constructing the dense re-parameterization module DR-Block: DR-Block is formed by cascading 1 transformation structure with 1 learning structure, where the transformation structure is F_trans(x; C_rep_in), the learning structure is F_learn(x_learn; 2×C_rep_in, C_rep_out, K_rep, S_rep), and x_learn = F_trans(x; C_rep_in); the constructed dense re-parameterization module DR-Block is denoted F_rep(x; C_rep_in, C_rep_out, K_rep, S_rep).
In step 11), the basic unit consists of 1 convolution layer cascaded with 1 batch normalization layer, denoted F_basic(x; C_basic_in, C_basic_out, K_basic, S_basic), where x is the input, C_basic_in is the number of input channels, C_basic_out is the number of output channels, K_basic is the convolution kernel size, and S_basic is the stride. Specifically, the convolution layer has C_basic_in input channels, C_basic_out output channels and kernel size K_basic, and the batch normalization layer has C_basic_out channels.
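As an illustration (not part of the patent; all names are invented here), the basic unit's forward pass can be sketched in NumPy, writing a 1×1 convolution over C channels as a plain matrix multiply:

```python
import numpy as np

def basic_unit_forward(x, w, mu, sigma, gamma, beta):
    """Forward pass of a 'basic unit': a bias-free 1x1 convolution (here a
    matrix multiply over channels) followed by batch normalization with
    per-channel statistics (mu, sigma) and affine parameters (gamma, beta)."""
    y = w @ x  # convolution: (C_out, C_in) @ (C_in, N) -> (C_out, N)
    return gamma[:, None] * (y - mu[:, None]) / sigma[:, None] + beta[:, None]

# Toy shapes: 3 input channels, 4 output channels, 5 spatial positions.
rng = np.random.default_rng(0)
x = rng.normal(size=(3, 5))
w = rng.normal(size=(4, 3))
mu, sigma = rng.normal(size=4), rng.uniform(1.0, 2.0, size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)
out = basic_unit_forward(x, w, mu, sigma, gamma, beta)
print(out.shape)  # (4, 5)
```

With K_basic > 1 the matrix multiply becomes a true 2-D convolution; the channel-wise batch normalization arithmetic is unchanged.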
Step 12) is specifically as follows:
First, 4 basic units are cascaded to form a deep cascaded network structure that over-parameterizes the network; then skip connections are added between every pair of basic units to realize model integration of features at different levels; finally, the outputs of all basic units are spliced into a whole, denoted F_trans(x; C_trans_in).
In step 13), the learning structure consists of 1 basic unit, denoted F_learn(x; C_learn_in, C_learn_out, K_learn, S_learn).
Step 2) specifically comprises the following steps:
21) Collecting and labeling a safety helmet detection data set from construction sites and web images, and performing standard data preprocessing;
22) Building a standard YOLOv3-tiny model with the number of detection categories set to 2: person without a helmet and person with a helmet.
Step 3) specifically comprises the following steps:
31) Replacing every non-1×1 convolution layer and its cascaded batch normalization layer in the standard YOLOv3-tiny model with a dense re-parameterization module DR-Block; the reconstructed model after replacement is denoted DR-Net. In the standard YOLOv3-tiny model, a convolution with C_in input channels, C_out output channels, kernel size K (K ≠ 1) and stride S is replaced by F_rep(x; C_in, C_out, K, S);
32) Training the reconstructed model DR-Net on the helmet detection data set with the standard YOLOv3-tiny training parameters and training strategy.
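Step 31) amounts to a simple rewrite rule over the model's layer list. A hypothetical sketch follows (the layer tuples and names are invented for illustration; they are not the patent's data structures):

```python
# Hypothetical layer specs: (kind, C_in, C_out, K, S).
layers = [
    ("conv", 3, 16, 3, 1),   # 3x3 conv  -> replaced by DR-Block
    ("conv", 16, 32, 1, 1),  # 1x1 conv  -> kept as-is (K == 1)
    ("conv", 32, 64, 3, 2),  # 3x3 conv, stride 2 -> replaced by DR-Block
]

def reconstruct(layers):
    """Replace every non-1x1 convolution with a DR-Block carrying the same
    (C_in, C_out, K, S), i.e. the F_rep(x; C_in, C_out, K, S) of step 31)."""
    return [("dr_block", c_in, c_out, k, s) if kind == "conv" and k != 1
            else (kind, c_in, c_out, k, s)
            for kind, c_in, c_out, k, s in layers]

dr_net = reconstruct(layers)
print(dr_net)
```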
Step 4) specifically comprises the following steps:
41) Converting the basic unit into a single convolution layer: let the convolution weight in the basic unit be w_conv, and let the batch normalization layer have mean μ, standard deviation σ, scaling coefficient γ and shift coefficient β; the reconstructed single convolution layer F'_basic then has weight w' = (γ/σ)·w_conv and bias b' = β − γμ/σ;
42) Converting a convolution with a skip connection into a single convolution layer: the converted single convolution layer F'_skip_connect has weight w' = concat([w_prev, w]) and bias b' = concat([b_prev, b]), where concat denotes the splicing operation, w_prev and b_prev are the weight and bias of the equivalent convolution, and w and b are the weight and bias of the convolution being converted;
43) Converting two cascaded convolutions into a single convolution layer;
44) Converting the transformation structure into a single 1×1 convolution layer: all convolutions in the transformation structure are converted into single convolution layers as per step 41), and then all skip connections are reconstructed into single convolution layers in sequence as per step 42), finally obtaining a single convolution layer F'_trans equivalent to the entire transformation structure;
45) Converting DR-Block into a single convolution layer;
46) Converting all DR-Blocks in DR-Net into single convolution layers as per step 45), obtaining the inference-stage structure required for deployment;
47) Extracting on-site image frames, inputting them into the model obtained in step 46) for safety helmet detection, and outputting the detection result.
Step 43) is specifically as follows:
First the weight w_1 of the first convolution has its first and second dimensions transposed; the result is denoted w_1'. The reconstructed single convolution layer F'_cascade then has weight w' = Conv2d(w_2, w_1') and bias b' = b_2 + (b_1 × w_2), where Conv2d denotes the two-dimensional convolution operation, b_1 is the bias of the first convolution, and w_2 and b_2 are the weight and bias of the second convolution.
Step 45) is specifically as follows:
First the transformation structure is converted into a single convolution layer F'_trans as per step 44); then the cascade of F'_trans with the learning structure is converted into a single convolution layer F'_rep as per step 43). F'_rep is the single-convolution-layer structure obtained by re-parameterizing DR-Block.
Compared with the prior art, the invention has the following advantages:
1. To address the insufficient real-time performance of existing deep-learning-based helmet detection algorithms, the invention trains and infers with the YOLOv3-tiny network, achieving high real-time performance.
2. The YOLOv3-tiny network is shallow and, in pursuit of real-time performance, not very accurate. The invention applies network re-parameterization to decouple the training and inference stages: a complex structure is used to learn the model during training, improving accuracy, and that structure is equivalently converted back to the original YOLOv3-tiny structure at inference, restoring real-time performance.
3. Because training data is limited and model generalization is insufficient, the invention adds a deep cascade structure inside the re-parameterization structure to over-parameterize the network and increase model capacity, producing an implicit regularization effect and improving the generalization capability of the original model.
4. The invention performs multi-stage distribution adjustment of the feedback gradient through multiple cascaded batch normalization layers, better ensuring the effectiveness of gradient feedback and avoiding gradient vanishing and gradient explosion to a certain extent.
5. Deep networks have many parameters and a great deal of redundancy, which weakens learning capability. Existing methods mainly strengthen the expressive power of a single convolution layer by designing multi-branch structures of different scales and complexity, but this introduces much feature redundancy and limits the achievable improvement. The invention instead realizes model integration of features at different levels through dense connections, greatly reducing feature redundancy, improving the learning capability of the network, and further improving the accuracy of the original model.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a diagram of the DR-Block structure and its re-parameterization transformation.
Detailed Description
The invention is described in detail below with reference to the figures and a specific embodiment.
Examples
As shown in FIG. 1, the invention provides a safety helmet detection method based on structural re-parameterization applied to edge devices, comprising the following steps:
First, a dense re-parameterization module is constructed according to the following steps.
S1, defining the "basic unit": its parameters are the input x, the number of input channels C_basic_in, the number of output channels C_basic_out, the convolution kernel size K_basic, and the stride S_basic. The basic unit comprises 1 convolution layer cascaded with 1 batch normalization layer, where the convolution layer has C_basic_in input channels, C_basic_out output channels and kernel size K_basic, and the batch normalization layer has C_basic_out channels. The "basic unit" is denoted F_basic(x; C_basic_in, C_basic_out, K_basic, S_basic);
S2, constructing the "transformation structure" on the basis of S1: its parameters are x and the number of input channels C_trans_in, and its number of output channels is 2×C_trans_in. The "transformation structure" is built as follows: first, 4 "basic units" are cascaded to form a deep cascaded network structure, which over-parameterizes the network, introduces an implicit regularization effect and increases model capacity; the multiple batch normalization layers thus introduced allow the re-parameterization module to effectively adjust the network's feedback gradient, avoiding gradient vanishing and gradient explosion. Then skip connections are added between every pair of "basic units" to realize model integration of features at different levels, reducing feature redundancy. Finally, the outputs of all basic units are spliced into a whole; the structure is denoted F_trans(x; C_trans_in);
S3, defining the "learning structure" on the basis of S1, used to extract important features and enlarge the receptive field: its parameters are x, the number of input channels C_learn_in, the number of output channels C_learn_out, the convolution kernel size K_learn, and the stride S_learn. The structure comprises 1 "basic unit", denoted F_learn(x; C_learn_in, C_learn_out, K_learn, S_learn);
S4, on the basis of S2 and S3, constructing the dense re-parameterization block, denoted DR-Block, whose parameters are the input x, the number of input channels C_rep_in, the number of output channels C_rep_out, the convolution kernel size K_rep, and the stride S_rep. DR-Block comprises 1 "transformation structure" cascaded with 1 "learning structure", where the transformation structure is F_trans(x; C_rep_in), the learning structure is F_learn(x_learn; 2×C_rep_in, C_rep_out, K_rep, S_rep), and x_learn = F_trans(x; C_rep_in). DR-Block is denoted F_rep(x; C_rep_in, C_rep_out, K_rep, S_rep).
Second, a YOLOv3-tiny model and a training data set suitable for helmet detection are constructed.
S5, collecting and labeling a safety helmet detection data set from construction sites and web images, and performing standard data preprocessing;
S6, building a standard YOLOv3-tiny model with the number of detection categories set to 2 (two categories: person without a helmet and person with a helmet);
Third, the YOLOv3-tiny prototype model is reconstructed and trained according to the following steps.
S7, given the network model built in S6, every non-1×1 convolution layer and its cascaded batch normalization layer in the original model are replaced by DR-Block; the replaced model is denoted DR-Net. A convolution in the original model with C_in input channels, C_out output channels, kernel size K (K ≠ 1) and stride S is replaced by F_rep(x; C_in, C_out, K, S);
S8, training the reconstructed model of S7 on the data set produced in S5, using the standard YOLOv3-tiny training parameters and training strategy;
Finally, the trained reconstructed model is equivalently converted into an inference model according to the following steps, and detection is performed.
S9, converting the basic unit into a single convolution layer: the convolution weight in the basic unit is denoted w_conv; the batch normalization layer has mean μ, standard deviation σ, scaling coefficient γ and shift coefficient β; the reconstructed single convolution layer F'_basic then has weight w' = (γ/σ)·w_conv and bias b' = β − γμ/σ.
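The fold in S9 can be checked numerically. The following toy NumPy sketch (my own, not the patent's code) uses a 1×1 convolution written as a matrix, so the identity w' = (γ/σ)·w_conv, b' = β − γμ/σ is easy to verify:

```python
import numpy as np

def fuse_conv_bn(w_conv, mu, sigma, gamma, beta):
    """Fold a batch-norm layer into the preceding bias-free convolution:
    w' = (gamma / sigma) * w_conv, b' = beta - gamma * mu / sigma."""
    scale = gamma / sigma
    w_fused = scale[:, None] * w_conv  # scale each output channel's weights
    b_fused = beta - gamma * mu / sigma
    return w_fused, b_fused

rng = np.random.default_rng(1)
x = rng.normal(size=(3, 7))
w = rng.normal(size=(4, 3))  # 1x1 conv over channels, written as a matrix
mu, sigma = rng.normal(size=4), rng.uniform(1.0, 2.0, size=4)
gamma, beta = rng.normal(size=4), rng.normal(size=4)

# Training-time form: conv then batch norm.
y_train = gamma[:, None] * (w @ x - mu[:, None]) / sigma[:, None] + beta[:, None]
# Inference-time form: one convolution with a bias.
w_f, b_f = fuse_conv_bn(w, mu, sigma, gamma, beta)
y_infer = w_f @ x + b_f[:, None]
assert np.allclose(y_train, y_infer)  # the two forms are equivalent
```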
S10, on the basis of S9, converting a convolution with a skip connection into a single convolution layer. The input to this structure can be represented as the original input x passed through an equivalent convolution F_prev, whose weight is denoted w_prev and bias b_prev; the weight of the convolution in this structure is denoted w and its bias b. The converted single convolution layer F'_skip_connect then has weight w' = concat([w_prev, w]) and bias b' = concat([b_prev, b]), where concat denotes the splicing operation;
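S10's splice rule stacks weights along the output-channel axis. A toy sketch, under the assumption that both branches are already expressed as convolutions of the same input x (1×1 convolutions as matrices; names invented):

```python
import numpy as np

def fuse_skip_concat(w_prev, b_prev, w, b):
    """A skip connection that splices two branch outputs becomes a single
    convolution whose weights/biases are stacked along output channels:
    w' = concat([w_prev, w]), b' = concat([b_prev, b])."""
    return np.concatenate([w_prev, w], axis=0), np.concatenate([b_prev, b])

rng = np.random.default_rng(2)
x = rng.normal(size=(3, 6))
w_prev, b_prev = rng.normal(size=(4, 3)), rng.normal(size=4)
w, b = rng.normal(size=(2, 3)), rng.normal(size=2)

# Training-time form: run both branches on x and splice the outputs.
y_branches = np.concatenate(
    [w_prev @ x + b_prev[:, None], w @ x + b[:, None]], axis=0)
# Inference-time form: one convolution with stacked output channels.
w_merged, b_merged = fuse_skip_concat(w_prev, b_prev, w, b)
y_merged = w_merged @ x + b_merged[:, None]
assert np.allclose(y_branches, y_merged)
```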
S11, on the basis of S10, converting two cascaded convolutions into a single convolution layer. The first convolution has weight w_1 and bias b_1; the second has weight w_2 and bias b_2. First, the first and second dimensions of w_1 are transposed; the result is denoted w_1'. The reconstructed single convolution layer F'_cascade then has weight w' = Conv2d(w_2, w_1') and bias b' = b_2 + (b_1 × w_2), where Conv2d denotes the two-dimensional convolution operation.
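For 1×1 convolutions, S11's merge reduces to a matrix product, which makes the equivalence easy to check; for K×K kernels the Conv2d(w_2, w_1') operation in the text plays the role of this product (toy sketch, names invented):

```python
import numpy as np

def fuse_cascade(w1, b1, w2, b2):
    """Two stacked 1x1 convolutions (matrices) collapse to a single one:
    w' = w2 @ w1, b' = b2 + w2 @ b1."""
    return w2 @ w1, b2 + w2 @ b1

rng = np.random.default_rng(3)
x = rng.normal(size=(3, 5))
w1, b1 = rng.normal(size=(6, 3)), rng.normal(size=6)
w2, b2 = rng.normal(size=(2, 6)), rng.normal(size=2)

# Training-time form: the second convolution applied to the first's output.
y_two_convs = w2 @ (w1 @ x + b1[:, None]) + b2[:, None]
# Inference-time form: one merged convolution.
w_f, b_f = fuse_cascade(w1, b1, w2, b2)
y_single = w_f @ x + b_f[:, None]
assert np.allclose(y_two_convs, y_single)
```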
S12, on the basis of S11, converting the "transformation structure" into a single 1×1 convolution layer. First, all convolutions in the "transformation structure" are converted into single convolution layers as per S9; then all skip connections are reconstructed into single convolution layers as per S10; finally a single convolution layer F'_trans equivalent to the entire "transformation structure" is obtained.
S13, on the basis of S12, converting DR-Block into a single convolution layer. First, the "transformation structure" is converted into a single convolution layer F'_trans as per S12; then the cascade of F'_trans with the "learning structure" is converted into a single convolution layer F'_rep as per S11. F'_rep is the single-convolution-layer structure obtained by re-parameterizing DR-Block;
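Putting S9–S13 together on a toy DR-Block (two cascaded basic units whose outputs are spliced, followed by one learning unit, all 1×1 so convolutions are matrices), the training-time forward pass and the single fused convolution agree exactly. This is an illustrative sketch, not the patent's implementation; in particular the real transformation structure uses 4 basic units and dense pairwise skip connections.

```python
import numpy as np

rng = np.random.default_rng(4)
C, C_out, N = 3, 2, 8

def bn(y, p):
    mu, sigma, gamma, beta = p
    return gamma[:, None] * (y - mu[:, None]) / sigma[:, None] + beta[:, None]

def bn_params(c):
    return (rng.normal(size=c), rng.uniform(1.0, 2.0, size=c),
            rng.normal(size=c), rng.normal(size=c))

def fuse(w, p):
    """Fold batch norm into a bias-free conv (the S9 identity)."""
    mu, sigma, gamma, beta = p
    return (gamma / sigma)[:, None] * w, beta - gamma * mu / sigma

# Toy training-time DR-Block: two cascaded basic units, outputs spliced,
# then one learning unit (all 1x1 convolutions, written as matrices).
W1, P1 = rng.normal(size=(C, C)), bn_params(C)
W2, P2 = rng.normal(size=(C, C)), bn_params(C)
W3, P3 = rng.normal(size=(C_out, 2 * C)), bn_params(C_out)

x = rng.normal(size=(C, N))
u1 = bn(W1 @ x, P1)
u2 = bn(W2 @ u1, P2)
y_train = bn(W3 @ np.concatenate([u1, u2], axis=0), P3)

# Re-parameterization: fold BNs (S9), merge the cascade (S11), splice the
# branches (S10/S12), then absorb the learning unit (S11 again, as in S13).
A1, a1 = fuse(W1, P1)
A2, a2 = fuse(W2, P2)
A3, a3 = fuse(W3, P3)
W_casc, b_casc = A2 @ A1, a2 + A2 @ a1              # unit 2 as a conv of x
W_t = np.concatenate([A1, W_casc], axis=0)          # spliced transformation
b_t = np.concatenate([a1, b_casc])
W_rep, b_rep = A3 @ W_t, a3 + A3 @ b_t              # absorb learning unit

y_infer = W_rep @ x + b_rep[:, None]
assert np.allclose(y_train, y_infer)  # DR-Block == one convolution
```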
S14, finally, all DR-Blocks in DR-Net are converted into single convolution layers as per S13, obtaining the inference-stage structure required for deployment;
S15, extracting on-site image frames, inputting them into the model obtained in S14 for safety helmet detection, and outputting the detection result.
Table 1 shows the experimental results of the invention on the safety helmet detection data set. The results show that, without changing the network structure of the original method and without adding inference overhead, the invention improves the accuracy of the helmet detection task on edge devices in a plug-and-play manner.
TABLE 1. Effect of the invention on the accuracy of the helmet detection algorithm

Index | AP@0.5-0.95 | AP@0.5
---|---|---
Original model | 78.1 | 92.1
DR-Block | 81.2 (↑3.1) | 96.4 (↑4.3)
Because DR-Block is plug-and-play, the invention can be applied to a variety of tasks, promoting efficient deployment and real-time computation of deep networks on edge devices.
In conclusion, to improve the performance of helmet detection algorithms on edge devices, the invention designs a dense linear composite structure that replaces the algorithm's convolutions in the training stage and is reconstructed back into the simple convolutions of the original model in the inference stage, achieving a performance improvement while keeping the inference speed unchanged.
Specifically, the method first constructs a dense network re-parameterization structure (DR-Block). The structure over-parameterizes the network through a deep cascade and increases model capacity, producing an implicit regularization effect and improving the generalization of the original model; its cascaded batch normalization layers perform multi-stage distribution adjustment of the feedback gradient, better ensuring gradient effectiveness and avoiding gradient vanishing and gradient explosion to a certain extent; and its dense connections realize model integration of features at different levels, greatly reducing feature redundancy and improving the learning capability of the original model. The method then constructs a highly real-time YOLOv3-tiny network adapted to the helmet detection task, replaces the original network convolutions with DR-Block during training to improve performance, and finally equivalently converts DR-Block back into the simple convolutions of the original YOLOv3-tiny network, achieving a performance improvement without extra inference overhead and enabling deployment for the helmet detection task.
Claims (10)
1. A safety helmet detection method based on structural re-parameterization applied to edge devices, characterized by comprising the following steps:
1) Constructing a dense re-parameterization module;
2) Constructing a standard YOLOv3-tiny model and a training data set suitable for safety helmet detection;
3) Reconstructing and training the standard YOLOv3-tiny model;
4) Equivalently converting the trained reconstructed model into an inference model and performing safety helmet detection.
2. The safety helmet detection method based on structural re-parameterization applied to edge devices according to claim 1, characterized in that step 1) specifically comprises the following steps:
11) Defining a basic unit;
12) Constructing a transformation structure;
13) Constructing a learning structure;
14) Constructing the dense re-parameterization module DR-Block: DR-Block is formed by cascading 1 transformation structure with 1 learning structure, where the transformation structure is F_trans(x; C_rep_in), the learning structure is F_learn(x_learn; 2×C_rep_in, C_rep_out, K_rep, S_rep), and x_learn = F_trans(x; C_rep_in); the constructed dense re-parameterization module DR-Block is denoted F_rep(x; C_rep_in, C_rep_out, K_rep, S_rep).
3. The safety helmet detection method based on structural re-parameterization applied to edge devices according to claim 2, characterized in that in step 11) the basic unit is formed by cascading 1 convolution layer with 1 batch normalization layer, denoted F_basic(x; C_basic_in, C_basic_out, K_basic, S_basic), where x is the input, C_basic_in is the number of input channels, C_basic_out is the number of output channels, K_basic is the convolution kernel size, and S_basic is the stride; specifically, the convolution layer has C_basic_in input channels, C_basic_out output channels and kernel size K_basic, and the batch normalization layer has C_basic_out channels.
4. The safety helmet detection method based on structural re-parameterization applied to edge devices according to claim 3, characterized in that step 12) is specifically as follows:
First, 4 basic units are cascaded to form a deep cascaded network structure that over-parameterizes the network; then skip connections are added between every pair of basic units to realize model integration of features at different levels; finally, the outputs of all basic units are spliced into a whole, denoted F_trans(x; C_trans_in).
5. The safety helmet detection method based on structural re-parameterization applied to edge devices according to claim 4, characterized in that in step 13) the learning structure consists of 1 basic unit, denoted F_learn(x; C_learn_in, C_learn_out, K_learn, S_learn).
6. The safety helmet detection method based on structural re-parameterization applied to edge devices according to claim 1, characterized in that step 2) specifically comprises the following steps:
21) Collecting and labeling a safety helmet detection data set from construction sites and web images, and performing standard data preprocessing;
22) Building a standard YOLOv3-tiny model with the number of detection categories set to 2: person without a helmet and person with a helmet.
7. The safety helmet detection method based on structural re-parameterization applied to edge devices according to claim 6, characterized in that step 3) comprises the following steps:
31) Replacing every non-1×1 convolution layer and its cascaded batch normalization layer in the standard YOLOv3-tiny model with a dense re-parameterization module DR-Block; the reconstructed model after replacement is denoted DR-Net. In the standard YOLOv3-tiny model, a convolution with C_in input channels, C_out output channels, kernel size K (K ≠ 1) and stride S is replaced by F_rep(x; C_in, C_out, K, S);
32) Training the reconstructed model DR-Net on the helmet detection data set with the standard YOLOv3-tiny training parameters and training strategy.
8. The method for detecting safety helmet based on structural parameterization applied to edge device according to claim 2, wherein the step 4) comprises the following steps:
41 Convert the elementary units into a single convolutional layer, the weight of the convolution in the elementary units is denoted as w conv The mean of the batch normalization layer is denoted as μ, the standard deviation is σ, the scaling factor is γ, and the deviation isA shift coefficient of beta, the reconstructed single convolutional layer F' basic Has a weight ofIs biased to
42 Conversion of convolution using skip connection into a single convolution layer, converted single convolution layer F' skip_connect Weight of (b) is w' = concat ([ w ] prev ,w]) Bias is b '= concat ([ b' = b) prev ,b]) Wherein concat represents a cascade operation, w preu Weights for equivalent convolution, b prev The offset of the equivalent convolution, w is the weight of the converted convolution, and b is the offset of the converted convolution;
43) Converting two cascaded convolutions into a single convolutional layer;
44) Converting the transform structure into a single 1×1 convolutional layer: all convolutions in the transform structure are first converted into single convolutional layers as per step 41), then all skip connections are reconstructed in sequence into a single convolutional layer as per step 42), finally obtaining a single convolutional layer F'_trans equivalent to the entire transform structure;
45) Converting the DR-Block into a single convolutional layer;
46) Converting all DR-Blocks in the DR-Net into single convolutional layers according to step 45), so as to obtain the inference-stage structure required for deployment;
47) Extracting on-site image frames, inputting them into the model obtained in step 46) for safety helmet detection, and outputting the detection result.
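Step 41) is the standard folding of a batch normalization layer into the preceding convolution: for a BN layer with statistics μ and σ and affine parameters γ and β, scaling the convolution weight by γ/σ and setting the bias to β − γμ/σ reproduces conv→BN exactly at inference time. A minimal PyTorch sketch (the function name is illustrative):

```python
import torch
import torch.nn as nn

@torch.no_grad()
def fuse_conv_bn(conv: nn.Conv2d, bn: nn.BatchNorm2d) -> nn.Conv2d:
    """Fold BatchNorm2d (mean mu, std sigma, scale gamma, shift beta) into
    the preceding convolution: w' = (gamma/sigma) * w_conv and
    b' = beta - gamma * mu / sigma (extended with the conv bias if present)."""
    fused = nn.Conv2d(conv.in_channels, conv.out_channels,
                      conv.kernel_size, conv.stride,
                      conv.padding, conv.dilation, conv.groups, bias=True)
    sigma = torch.sqrt(bn.running_var + bn.eps)
    scale = bn.weight / sigma  # gamma / sigma, one factor per output channel
    fused.weight.copy_(conv.weight * scale.reshape(-1, 1, 1, 1))
    b_conv = conv.bias if conv.bias is not None else torch.zeros(conv.out_channels)
    fused.bias.copy_(bn.bias + (b_conv - bn.running_mean) * scale)
    return fused
```

The BN layer must be in evaluation mode (using its running statistics) for the fused layer to match exactly; BN's small eps term is absorbed into σ.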
9. The structural parameterization based safety helmet detection method applied to edge devices according to claim 8, wherein step 43) is specifically as follows:
First, the first and second dimensions of the weight w_1 of the first convolution are transposed, the result being denoted w_1'; the reconstructed single convolutional layer F'_cascade then has weight w' = Conv2d(w_2, w_1') and bias b' = b_2 + (b_1 × w_2), where Conv2d denotes the two-dimensional convolution operation, b_1 is the bias of the first convolution, w_2 is the weight of the second convolution, and b_2 is the bias of the second convolution.
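The transform of claim 9 can be sketched in PyTorch for the common case where the first convolution is 1×1 (the function name is illustrative). Convolving w_2 with the transposed w_1 collapses the two kernels, and the bias term "pushes" b_1 through the second convolution. Note the sketch assumes the second convolution applies no zero padding; with padding, the border would see zeros instead of b_1 and need compensation:

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def merge_1x1_kxk(w1, b1, w2, b2):
    """Merge a 1x1 convolution (w1, b1) followed by a KxK convolution
    (w2, b2) into one KxK convolution:
      w' = Conv2d(w2, w1 with its first two dimensions transposed)
      b' = b2 + the response of w2 to the constant map b1."""
    # w1: (C_mid, C_in, 1, 1) -> (C_in, C_mid, 1, 1); conv gives (C_out, C_in, K, K)
    w_merged = F.conv2d(w2, w1.permute(1, 0, 2, 3))
    b_merged = b2 + (w2 * b1.reshape(1, -1, 1, 1)).sum(dim=(1, 2, 3))
    return w_merged, b_merged
```

This is the same weight-space identity used by sequential-branch fusions in re-parameterization networks such as RepVGG and the Diverse Branch Block.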
10. The structural parameterization based safety helmet detection method applied to edge devices according to claim 8, wherein step 45) is specifically as follows:
First, the transform structure is converted into a single convolutional layer F'_trans as per step 44); then the cascade of the single convolutional layer F'_trans with the learning structure is converted into a single convolutional layer F'_rep as per step 43); F'_rep is the single-convolutional-layer structure of the DR-Block after re-parameterization.
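The concatenation-based fusion of step 42), one of the building blocks chained inside step 45), rests on a simple identity: two parallel convolutions whose outputs are concatenated along the channel dimension are equivalent to one convolution whose weight and bias are the channel-wise concatenations of the branch parameters. A small numeric check (function name illustrative):

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def merge_concat(w_prev, b_prev, w, b):
    """Step 42): concatenating the outputs of two parallel convolutions is
    equivalent to one convolution with
      w' = concat([w_prev, w])  and  b' = concat([b_prev, b])
    along the output-channel dimension (dim 0 of the weight tensors)."""
    return torch.cat([w_prev, w], dim=0), torch.cat([b_prev, b], dim=0)
```

Both branches must share kernel size, stride and padding for the fused layer to be well defined; the output channel count of the fused layer is the sum of the two branch widths.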
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210843903.0A CN115410111A (en) | 2022-07-18 | 2022-07-18 | Safety helmet detection method applied to edge equipment and based on structural parameterization |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115410111A true CN115410111A (en) | 2022-11-29 |
Family
ID=84158328
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115410111A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116206188A (en) * | 2023-05-04 | 2023-06-02 | 浪潮电子信息产业股份有限公司 | Image recognition method, system, equipment and storage medium |
CN117789153A (en) * | 2024-02-26 | 2024-03-29 | 浙江驿公里智能科技有限公司 | Automobile oil tank outer cover positioning system and method based on computer vision |
CN117789153B (en) * | 2024-02-26 | 2024-05-03 | 浙江驿公里智能科技有限公司 | Automobile oil tank outer cover positioning system and method based on computer vision |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |