CN114067365B - Helmet wearing detection method and system based on central attention network


Info

Publication number
CN114067365B
CN114067365B (application CN202111397722.1A)
Authority
CN
China
Prior art keywords: corner, centripetal, upper left, coordinate set, lower right
Prior art date
Legal status
Active
Application number
CN202111397722.1A
Other languages
Chinese (zh)
Other versions
CN114067365A (en)
Inventor
蔡念
刘至键
陈妍帆
陈煜
张承滨
王晗
Current Assignee
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date
Filing date
Publication date
Application filed by Guangdong University of Technology
Priority to CN202111397722.1A
Publication of CN114067365A
Application granted
Publication of CN114067365B


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/04: Architecture, e.g. interconnection topology
    • G06N 3/045: Combinations of networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a helmet wearing detection method and system based on a central attention network, and relates to the technical field of intelligent video-surveillance analysis. The invention uses a central attention centripetal network to perform helmet wearing detection on acquired images; the network learns features automatically from data, without manually designed features, and therefore generalizes well. The central attention centripetal network uses a corner prediction module to predict the positions of the upper left and lower right corners separately, and the detection frame of a target is obtained by pairing an upper left corner with a lower right corner. The corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction, yielding more accurate corner positions and hence higher detection precision. A boundary constraint center attention module is added during training, and a vertical-horizontal corner pooling layer is added in the corner prediction stage, further improving prediction accuracy.

Description

Helmet wearing detection method and system based on central attention network
Technical Field
The invention relates to the technical field of intelligent video-surveillance analysis, and in particular to a helmet wearing detection method and system based on a central attention network.
Background
The construction industry is one of the pillar industries in China. However, owing to long production cycles, frequent outdoor work at height and complex construction processes, it is also one of the industries with the most serious production-safety accidents in China. Falls from height and struck-by accidents account for the majority of all construction accidents, and injuries to the head are far more lethal than injuries to other parts of the body. Correct use of safety helmets greatly reduces fatal head injuries; in practice, however, many workers wear their helmets carelessly or not at all. Monitoring workers' helmet use with automated technology and issuing timely warnings is therefore of great significance for protecting workers' lives and reducing the accident rate of the construction industry. Moreover, by identifying helmet color, constructors of different identities and posts can be distinguished, strengthening the management of personnel on construction sites.
Early helmet wearing detection methods were based mainly on sensors or traditional vision techniques. Sensor-based detection methods rely on remote positioning and tracking technologies such as RFID and wireless local area networks: electronic tags and sensors embedded in the helmet enable real-time positioning and tracking, while physical signals such as pressure, infrared beams and heat confirm whether the helmet is actually being worn. These methods are inefficient and expensive: thousands of helmets must be fitted with tags and sensors, and fixed data base stations must be built, so high installation and maintenance costs are unavoidable; the deployment is inconvenient in many respects and unfavorable for large-scale practical adoption.
By comparison, vision-based methods combine video surveillance with computer-vision detection: covering the construction area with a modest number of cameras enables real-time automatic monitoring of the site, so that workers not wearing helmets correctly can be detected from the monitoring images and alarms raised promptly, at lower cost and with flexible deployment. However, early vision methods were based mainly on traditional image processing: image features such as LBP, Hu invariant moments, color histograms and HOG had to be extracted and combined with traditional machine learning to obtain detection results. Such pipelines are slow, complex to design and generalize poorly; hand-crafted feature extractors seldom capture rich high-level visual features and struggle to adapt to complex, changing detection environments, so their performance degrades badly under variations in weather, illumination, construction background, occlusion and human pose.
In recent years, therefore, deep-learning-based methods have become the mainstream direction for research on the helmet wearing detection problem. Driven by data, a deep-learning-based detector learns features automatically from existing samples, overcoming the inability of hand-designed features to adapt to real detection scenes. Nevertheless, due to the complexity of the detection environment, deep-learning helmet detectors still face many challenges despite their much better generality than traditional vision techniques: existing schemes are easily affected in practice by small targets, night scenes, dense crowds, visually similar objects and other factors that limit detection precision. Studying a novel object detection model to build a real-time helmet wearing detection method with high precision and high robustness therefore has strong practical significance.
Publication CN112949354A, a method and device for detecting the wearing of a safety helmet, electronic equipment and a computer-readable storage medium, published 2021-06-11, is easily affected in practical applications by factors such as small-target detection, night scenes, dense crowds and similar articles, and has relatively low detection precision.
Disclosure of Invention
The invention provides a helmet wearing detection method and system based on a central attention network that achieve high precision and high robustness.
The technical scheme of the invention is as follows:
a helmet wear detection method based on a central attention network, the method comprising the steps of:
S1, normalizing the size of an image to be detected;
S2, extracting features from the image to be detected to obtain a feature map, and performing offset prediction, corner position prediction and centripetal vector prediction on the feature map with a corner prediction module to obtain an offset heat map, a corner position heat map and a centripetal vector heat map;
S3, performing local-maximum screening and TopK screening on the corner position heat map and filtering redundant corner detection results to obtain an upper left corner coordinate set and a lower right corner coordinate set, together with the category and confidence of each corner in each set, wherein the corner categories comprise: head wearing a safety helmet and head not wearing a safety helmet;
S4, correcting the corner positions by using the offset heat map to obtain a corrected upper left corner coordinate set and a corrected lower right corner coordinate set;
S5, constructing candidate detection frames from the corrected upper left and lower right corner coordinate sets, and calculating the centripetal region of each candidate detection frame;
S6, post-processing the candidate detection frames to obtain final detection frames, and judging the helmet wearing condition according to the category of each detection frame.
This technical scheme provides a helmet wearing detection method and system based on a central attention centripetal network. Images are acquired and helmet wearing detection is performed by the central attention centripetal network, which learns features automatically without manual feature design and therefore generalizes well. The network uses a corner prediction module to predict the positions of the upper left and lower right corners separately; the detection frame of a target is obtained by pairing an upper left corner with a lower right corner, and the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction, so that more accurate corner positions, and hence higher detection precision, are obtained.
Further, the corner categories of the head wearing a safety helmet comprise: head wearing a red safety helmet, head wearing a blue safety helmet, head wearing a white safety helmet, and head wearing a yellow safety helmet.
Further, the feature extraction in step S2 is implemented by a feature extraction network with a given downsampling rate, and the correction of the corner positions in step S4 comprises the steps of:
S41, using the x and y coordinates of each corner in the upper left corner coordinate set tl and the lower right corner coordinate set br as indexes, retrieving the offset heat map to obtain the offset of each corner;
S42, multiplying each corner coordinate by the downsampling rate of the feature extraction network to map the corner position in the offset heat map back to the corresponding position in the input image, and adding the offset to the coordinates to correct the precision loss of the downsampling process, thereby obtaining the corrected upper left corner coordinate set tl and lower right corner coordinate set br.
Further, the candidate detection frames in step S5 are constructed as follows:
S51, taking the corrected upper left corner coordinate set tl and lower right corner coordinate set br, and using the x and y coordinates of each corner in tl and br as indexes, retrieving the centripetal vector heat map to obtain the centripetal vector of the current corner; adding each upper left corner coordinate in tl to its corresponding centripetal vector to obtain the target center coordinate (tl_ctx, tl_cty) associated with each upper left corner, and likewise computing the target center coordinate (br_ctx, br_cty) of each lower right corner;
S52, combining each upper left corner coordinate in tl with every lower right corner coordinate in br, exhaustively pairing them to obtain a pairing matrix of candidate detection frames, wherein the pairing of the i-th upper left corner in tl with the j-th lower right corner in br gives bbox_ij = (tlx_i, tly_i, brx_j, bry_j); the target centers (tl_ctx_i, tl_cty_i) and (br_ctx_j, br_cty_j) corresponding to each bbox_ij are stored for pairing screening, and the target confidence is calculated:
score_ij = (tl_score_i + br_score_j) / 2,
where tl_score_i is the confidence score of the i-th upper left corner and br_score_j is the confidence score of the j-th lower right corner.
Further, the centripetal region in step S5 is calculated as follows: for each candidate detection frame bbox_ij, a centripetal region Rcentral_ij = (ctlx_ij, ctly_ij, cbrx_ij, cbry_ij) is defined, a centered sub-box of bbox_ij whose sides are scaled by the ratio μ, where ctlx_ij, ctly_ij, cbrx_ij and cbry_ij are calculated by the following formulas:
ctlx_ij = ((1 + μ) · tlx_i + (1 − μ) · brx_j) / 2
ctly_ij = ((1 + μ) · tly_i + (1 − μ) · bry_j) / 2
cbrx_ij = ((1 − μ) · tlx_i + (1 + μ) · brx_j) / 2
cbry_ij = ((1 − μ) · tly_i + (1 + μ) · bry_j) / 2
where μ is a hyperparameter, ctlx_ij is the upper left abscissa of the centripetal region, ctly_ij is the upper left ordinate, cbrx_ij is the lower right abscissa, and cbry_ij is the lower right ordinate.
Further, the post-processing in step S6 comprises screening and filtering the candidate detection frames, i.e. removing all implausible detection frames according to the following screening conditions:
if the categories of the two corners are inconsistent, the detection frame is removed; the judgment formula is:
tl_clses_i ≠ br_clses_j
where tl_clses_i is the category of the upper left corner of the detection frame and br_clses_j is the category of the lower right corner of the detection frame;
if the upper left corner is not located above and to the left of the lower right corner, the detection frame is removed; the judgment formula is:
tlx_i > brx_j | tly_i > bry_j
where (tlx_i, tly_i) are the coordinates of the upper left corner of the detection frame and (brx_j, bry_j) are the coordinates of the lower right corner of the detection frame;
if a predicted target center is not inside the centripetal region, the detection frame is removed; the judgment formula is:
tl_ctx_i < ctlx_ij | tl_ctx_i > cbrx_ij | tl_cty_i < ctly_ij | tl_cty_i > cbry_ij | br_ctx_j < ctlx_ij | br_ctx_j > cbrx_ij | br_cty_j < ctly_ij | br_cty_j > cbry_ij
where (tl_ctx_i, tl_cty_i) and (br_ctx_j, br_cty_j) are the target center coordinates and (ctlx_ij, ctly_ij, cbrx_ij, cbry_ij) is obtained from the centripetal region.
Further, after the candidate detection frames are screened and filtered, a Soft-NMS algorithm is used to remove, from overlapping detection frames, those whose confidence does not meet a preset condition.
A helmet wearing detection system, comprising: a normalization module, a feature extraction network, a corner prediction module, a corner screening module, a corner position correction module, a detection frame construction module and a post-processing module;
the normalization module normalizes the size of the image to be detected; the feature extraction network performs feature extraction on the image to be detected to obtain a feature map; the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction on the feature map to obtain an offset heat map, a corner position heat map and a centripetal vector heat map; the corner screening module performs local-maximum screening and TopK screening on the corner position heat map and filters redundant corner detection results to obtain an upper left corner coordinate set and a lower right corner coordinate set, together with the category and confidence of each corner in each set, the corner categories comprising: head wearing a safety helmet and head not wearing a safety helmet; the corner position correction module corrects the corner positions using the offset heat map to obtain corrected upper left and lower right corner coordinate sets; the detection frame construction module constructs candidate detection frames from the corrected corner coordinate sets and calculates the centripetal region of each candidate detection frame; and the post-processing module post-processes the detection frames to obtain final detection frames and judges the helmet wearing condition according to the category of each detection frame.
Further, during training the helmet wearing detection system further comprises a boundary constraint center attention module, which comprises: a center pooling layer, an offset prediction module, a center point position prediction module and a boundary constraint vector prediction module;
the center pooling layer takes the feature map output by the feature extraction network; through it the boundary constraint center attention module acquires the most discriminative internal feature information, forcing the feature extraction network to learn to extract critical internal information. For a target to be detected, the output of the center pooling layer is passed to the offset prediction module, the center point position prediction module and the boundary constraint vector prediction module: the center point position prediction module predicts the center point coordinates and a corresponding confidence score, the offset prediction module predicts an offset to make the position more accurate, and the boundary constraint vector prediction module predicts a set of boundary constraint vectors representing the boundary size constraint of the target, forcing the feature extraction network to capture the scale information of the target; the boundary constraint (l_h, l_w) is expressed as:
l_h = (br_y − tl_y) / s,  l_w = (br_x − tl_x) / s
where s is the downsampling rate, (tl_x, tl_y) are the coordinates of the upper left corner and (br_x, br_y) are the coordinates of the lower right corner.
Further, the corner prediction module comprises: a vertical-horizontal corner pooling layer, an offset prediction module, a corner position prediction module and a centripetal vector prediction module;
the vertical-horizontal corner pooling layer takes the feature map output by the feature extraction network and extracts, using convolution blocks each consisting of a 3x3 convolution layer, a BN layer and a ReLU activation layer, the features of the input feature map at the target interior, the target horizontal edges and the corner positions respectively; the maximum of the target's internal features is focused onto the horizontal edge positions by a vertical pooling operation and added to the feature values there, after which the new feature maxima are focused onto the corner positions by a horizontal pooling operation and added to the local features at the corner positions; the features output by the vertical-horizontal corner pooling layer then pass through the offset prediction module, the corner position prediction module and the centripetal vector prediction module to obtain the offset heat map, the corner position heat map and the centripetal vector heat map respectively.
Compared with the prior art, the technical scheme of the invention has the following beneficial effects: the central attention centripetal network acquires images and performs helmet wearing detection, learning features automatically without manual feature design, with good generality; it uses a corner prediction module to predict the positions of the upper left and lower right corners separately, obtains the detection frame of a target by pairing the two corners, and performs offset prediction, corner position prediction and centripetal vector prediction to obtain more accurate corner positions and higher detection precision; a boundary constraint center attention module is added during training and a vertical-horizontal corner pooling layer is added in the corner prediction stage, further improving prediction accuracy.
Drawings
FIG. 1 is a schematic diagram of a central attention centripetal network structure;
FIG. 2 is a schematic diagram of a network structure of a corner prediction module;
FIG. 3 is a schematic diagram of a boundary constraint center attention module network architecture;
FIG. 4 is a schematic diagram of the network structure of the cross-star deformable convolution module.
Detailed Description
To clearly illustrate the invention, a helmet wearing detection method and system based on a central attention network, the invention is further described below with reference to the embodiments and drawings; the scope of protection of the invention should not be limited thereby.
Example 1
A method for detecting the wearing of a helmet based on a central attention network comprises the following steps:
S1, normalizing the size of an image to be detected;
S2, extracting features from the image to be detected to obtain a feature map, and performing offset prediction, corner position prediction and centripetal vector prediction on the feature map with a corner prediction module to obtain an offset heat map, a corner position heat map and a centripetal vector heat map;
S3, performing local-maximum screening and TopK screening on the corner position heat map and filtering redundant corner detection results to obtain an upper left corner coordinate set and a lower right corner coordinate set, together with the category and confidence of each corner in each set, wherein the corner categories comprise: head wearing a safety helmet and head not wearing a safety helmet;
S4, correcting the corner positions by using the offset heat map to obtain a corrected upper left corner coordinate set and a corrected lower right corner coordinate set;
S5, constructing candidate detection frames from the corrected upper left and lower right corner coordinate sets, and calculating the centripetal region of each candidate detection frame;
S6, post-processing the candidate detection frames to obtain final detection frames, and judging the helmet wearing condition according to the category of each detection frame.
This embodiment discloses a helmet wearing detection method based on a central attention centripetal network. Images are acquired and helmet wearing detection is performed by the central attention centripetal network, which learns features automatically without manual feature design and therefore generalizes well; the network uses a corner prediction module to predict the positions of the upper left and lower right corners separately, obtains the detection frame of a target by pairing an upper left corner with a lower right corner, and performs offset prediction, corner position prediction and centripetal vector prediction, so that more accurate corner positions, and hence higher detection precision, are obtained.
Example 2
A method for detecting the wearing of a helmet based on a central attention network comprises the following steps:
S1, normalizing the size of an image to be detected;
S2, extracting features from the image to be detected to obtain a feature map, and performing offset prediction, corner position prediction and centripetal vector prediction on the feature map with a corner prediction module to obtain an offset heat map, a corner position heat map and a centripetal vector heat map; the feature extraction is implemented by a feature extraction network with a given downsampling rate;
S3, performing local-maximum screening and TopK screening on the corner position heat map and filtering redundant corner detection results to obtain an upper left corner coordinate set and a lower right corner coordinate set, together with the category and confidence of each corner in each set, wherein the corner categories comprise: head wearing a safety helmet and head not wearing a safety helmet;
the corner categories of the head wearing a safety helmet comprise: head wearing a red safety helmet, head wearing a blue safety helmet, head wearing a white safety helmet, and head wearing a yellow safety helmet;
S4, correcting the corner positions by using the offset heat map to obtain a corrected upper left corner coordinate set and a corrected lower right corner coordinate set;
the correction of the corner positions comprises the following steps:
S41, using the x and y coordinates of each corner in the upper left corner coordinate set tl and the lower right corner coordinate set br as indexes, retrieving the offset heat map to obtain the offset of each corner;
S42, multiplying each corner coordinate by the downsampling rate of the feature extraction network to map the corner position in the offset heat map back to the corresponding position in the input image, and adding the offset to the coordinates to correct the precision loss of the downsampling process, thereby obtaining the corrected upper left corner coordinate set tl and lower right corner coordinate set br; a sketch of this decoding procedure is given below.
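For illustration only, a minimal PyTorch sketch of the screening of step S3 and the correction of step S4 follows; the tensor layout, the function name decode_corners and the TopK value k are assumptions, not the literal implementation of the invention:

```python
import torch
import torch.nn.functional as F

def decode_corners(heatmap, offset_map, downsample=4, k=100):
    """Screen corner candidates from a corner position heat map and correct them.

    heatmap:    (C, H, W) per-class corner confidences after sigmoid
    offset_map: (2, H, W) predicted (dx, dy) offsets
    Returns corrected corner coordinates, classes and confidences.
    """
    # Local-maximum screening: keep only points that survive a 3x3 max-pool.
    pooled = F.max_pool2d(heatmap.unsqueeze(0), 3, stride=1, padding=1).squeeze(0)
    heatmap = heatmap * (pooled == heatmap).float()

    # TopK screening over all classes and positions.
    C, H, W = heatmap.shape
    scores, idx = heatmap.view(-1).topk(k)
    clses = torch.div(idx, H * W, rounding_mode='floor')
    ys = torch.div(idx % (H * W), W, rounding_mode='floor')
    xs = idx % W

    # Offset correction: map back to input resolution, then add the offset.
    dx = offset_map[0, ys, xs]
    dy = offset_map[1, ys, xs]
    xs = xs.float() * downsample + dx
    ys = ys.float() * downsample + dy
    return xs, ys, clses, scores
```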
S5, constructing candidate detection frames from the corrected upper left and lower right corner coordinate sets, and calculating the centripetal region of each candidate detection frame;
the candidate detection frames are constructed as follows:
S51, taking the corrected upper left corner coordinate set tl and lower right corner coordinate set br, and using the x and y coordinates of each corner in tl and br as indexes, retrieving the centripetal vector heat map to obtain the centripetal vector of the current corner; adding each upper left corner coordinate in tl to its corresponding centripetal vector to obtain the target center coordinate (tl_ctx, tl_cty) associated with each upper left corner, and likewise computing the target center coordinate (br_ctx, br_cty) of each lower right corner;
S52, combining each upper left corner coordinate in tl with every lower right corner coordinate in br, exhaustively pairing them to obtain a pairing matrix of candidate detection frames, wherein the pairing of the i-th upper left corner in tl with the j-th lower right corner in br gives bbox_ij = (tlx_i, tly_i, brx_j, bry_j); the target centers (tl_ctx_i, tl_cty_i) and (br_ctx_j, br_cty_j) corresponding to each bbox_ij are stored for pairing screening, and the target confidence is calculated:
score_ij = (tl_score_i + br_score_j) / 2,
where tl_score_i is the confidence score of the i-th upper left corner and br_score_j is the confidence score of the j-th lower right corner.
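A sketch of the exhaustive pairing of step S52, assuming the decoded corners are held in dictionaries with keys x, y, score, ctx and cty (an assumed layout, not prescribed by the patent):

```python
import torch

def build_candidate_boxes(tl, br):
    """Exhaustively pair K top-left with K bottom-right corners (step S52).

    tl, br: dicts with 'x', 'y', 'score', 'ctx', 'cty' tensors of shape (K,).
    Returns boxes (K, K, 4), scores (K, K) and the stored target centers.
    """
    K = tl['x'].shape[0]
    # bbox_ij = (tlx_i, tly_i, brx_j, bry_j) via broadcasting.
    boxes = torch.stack([
        tl['x'][:, None].expand(K, K),
        tl['y'][:, None].expand(K, K),
        br['x'][None, :].expand(K, K),
        br['y'][None, :].expand(K, K),
    ], dim=-1)
    # score_ij = (tl_score_i + br_score_j) / 2
    scores = (tl['score'][:, None] + br['score'][None, :]) / 2
    # Target centers kept for the pairing screening of step S6.
    tl_centers = torch.stack([tl['ctx'], tl['cty']], dim=-1)  # (K, 2)
    br_centers = torch.stack([br['ctx'], br['cty']], dim=-1)  # (K, 2)
    return boxes, scores, tl_centers, br_centers
```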
The centripetal region is calculated as follows: for each candidate detection frame bbox_ij, a centripetal region Rcentral_ij = (ctlx_ij, ctly_ij, cbrx_ij, cbry_ij) is defined, a centered sub-box of bbox_ij whose sides are scaled by the ratio μ, where ctlx_ij, ctly_ij, cbrx_ij and cbry_ij are calculated by the following formulas:
ctlx_ij = ((1 + μ) · tlx_i + (1 − μ) · brx_j) / 2
ctly_ij = ((1 + μ) · tly_i + (1 − μ) · bry_j) / 2
cbrx_ij = ((1 − μ) · tlx_i + (1 + μ) · brx_j) / 2
cbry_ij = ((1 − μ) · tly_i + (1 + μ) · bry_j) / 2
where μ is a hyperparameter, ctlx_ij is the upper left abscissa of the centripetal region, ctly_ij is the upper left ordinate, cbrx_ij is the lower right abscissa, and cbry_ij is the lower right ordinate.
For detection frames with area greater than 3500, μ = 1/2.1 is taken; otherwise μ = 1/2.4.
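The centripetal region computation can be sketched as follows, using the reconstruction above and the per-area choice of μ; the function name and tensor layout are assumptions:

```python
import torch

def centripetal_region(boxes, area_thresh=3500.0):
    """Compute the centripetal region Rcentral for each candidate box (step S5).

    boxes: (..., 4) tensor of (tlx, tly, brx, bry).
    mu is chosen per box area as in this embodiment: 1/2.1 above 3500, else 1/2.4.
    """
    tlx, tly, brx, bry = boxes.unbind(-1)
    area = (brx - tlx) * (bry - tly)
    mu = torch.where(area > area_thresh,
                     torch.full_like(area, 1 / 2.1),
                     torch.full_like(area, 1 / 2.4))
    ctlx = ((1 + mu) * tlx + (1 - mu) * brx) / 2
    ctly = ((1 + mu) * tly + (1 - mu) * bry) / 2
    cbrx = ((1 - mu) * tlx + (1 + mu) * brx) / 2
    cbry = ((1 - mu) * tly + (1 + mu) * bry) / 2
    return torch.stack([ctlx, ctly, cbrx, cbry], dim=-1)
```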
S6, post-processing the candidate detection frames to obtain final detection frames, and judging the helmet wearing condition according to the category of each detection frame;
the post-processing comprises screening and filtering the candidate detection frames, i.e. removing all implausible detection frames according to the following screening conditions:
if the categories of the two corners are inconsistent, the detection frame is removed; the judgment formula is:
tl_clses_i ≠ br_clses_j
where tl_clses_i is the category of the upper left corner of the detection frame and br_clses_j is the category of the lower right corner of the detection frame;
if the upper left corner is not located above and to the left of the lower right corner, the detection frame is removed; the judgment formula is:
tlx_i > brx_j | tly_i > bry_j
where (tlx_i, tly_i) are the coordinates of the upper left corner of the detection frame and (brx_j, bry_j) are the coordinates of the lower right corner of the detection frame;
if a predicted target center is not inside the centripetal region, the detection frame is removed; the judgment formula is:
tl_ctx_i < ctlx_ij | tl_ctx_i > cbrx_ij | tl_cty_i < ctly_ij | tl_cty_i > cbry_ij | br_ctx_j < ctlx_ij | br_ctx_j > cbrx_ij | br_cty_j < ctly_ij | br_cty_j > cbry_ij
where (tl_ctx_i, tl_cty_i) and (br_ctx_j, br_cty_j) are the target center coordinates and (ctlx_ij, ctly_ij, cbrx_ij, cbry_ij) is obtained from the centripetal region.
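The three screening conditions can be evaluated jointly as a boolean mask; the following sketch assumes the pairing-matrix layout of the earlier sketches:

```python
import torch

def filter_candidates(boxes, tl_clses, br_clses, tl_centers, br_centers, regions):
    """Remove implausible candidate frames by the three screening conditions.

    boxes: (K, K, 4); tl_clses, br_clses: (K,) corner classes;
    tl_centers, br_centers: (K, 2) predicted target centers; regions: (K, K, 4).
    Returns a boolean (K, K) mask of surviving pairs.
    """
    # Condition 1: the two corners must share the same class.
    same_cls = tl_clses[:, None] == br_clses[None, :]
    # Condition 2: the upper left corner must lie above-left of the lower right.
    tlx, tly, brx, bry = boxes.unbind(-1)
    geometric = (tlx <= brx) & (tly <= bry)
    # Condition 3: both predicted centers must fall inside the centripetal region.
    ctlx, ctly, cbrx, cbry = regions.unbind(-1)
    tcx, tcy = tl_centers[:, 0][:, None], tl_centers[:, 1][:, None]
    bcx, bcy = br_centers[:, 0][None, :], br_centers[:, 1][None, :]
    tl_in = (tcx >= ctlx) & (tcx <= cbrx) & (tcy >= ctly) & (tcy <= cbry)
    br_in = (bcx >= ctlx) & (bcx <= cbrx) & (bcy >= ctly) & (bcy <= cbry)
    return same_cls & geometric & tl_in & br_in
```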
After the candidate detection frames are screened and filtered, a Soft-NMS algorithm is used to remove the lower-confidence frames among overlapping detection frames; the confidence threshold of the Soft-NMS algorithm is set according to actual needs.
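A minimal sketch of Gaussian Soft-NMS as used here to suppress overlapping detection frames; the decay parameter sigma and the score threshold are assumptions to be set according to actual needs:

```python
import torch

def iou(a, b):
    """IoU between one box a (4,) and boxes b (M, 4), both as (tlx, tly, brx, bry)."""
    tlx = torch.maximum(a[0], b[:, 0]); tly = torch.maximum(a[1], b[:, 1])
    brx = torch.minimum(a[2], b[:, 2]); bry = torch.minimum(a[3], b[:, 3])
    inter = (brx - tlx).clamp(min=0) * (bry - tly).clamp(min=0)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    return inter / (area_a + area_b - inter)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Decay the scores of overlapping frames instead of deleting them outright;
    frames whose decayed score drops below the threshold are finally removed."""
    keep = []
    scores = scores.clone()
    idx = torch.arange(len(scores))
    while len(idx) > 0:
        best = scores[idx].argmax()
        cur = idx[best]
        keep.append(cur.item())
        idx = torch.cat([idx[:best], idx[best + 1:]])
        if len(idx) == 0:
            break
        # Gaussian decay of detections overlapping the currently kept frame.
        ov = iou(boxes[cur], boxes[idx])
        scores[idx] *= torch.exp(-(ov ** 2) / sigma)
        idx = idx[scores[idx] > score_thresh]
    return keep
```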
The helmet wearing detection system according to this embodiment is built on the central attention centripetal network and comprises: a normalization module, a feature extraction network, a corner prediction module, a corner screening module, a corner position correction module, a detection frame construction module and a post-processing module;
the normalization module normalizes the size of the image to be detected; the feature extraction network performs feature extraction on the image to be detected to obtain a feature map; the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction on the feature map to obtain an offset heat map, a corner position heat map and a centripetal vector heat map; the corner screening module performs local-maximum screening and TopK screening on the corner position heat map and filters redundant corner detection results to obtain an upper left corner coordinate set and a lower right corner coordinate set, together with the category and confidence of each corner in each set, the corner categories comprising: head wearing a safety helmet and head not wearing a safety helmet; the corner position correction module corrects the corner positions using the offset heat map to obtain corrected upper left and lower right corner coordinate sets; the detection frame construction module constructs candidate detection frames from the corrected corner coordinate sets and calculates the centripetal region of each candidate detection frame; and the post-processing module post-processes the detection frames to obtain final detection frames and judges the helmet wearing condition according to the category of each detection frame.
Example 3
This embodiment discloses a helmet wearing detection system based on a central attention centripetal network. As shown in FIG. 1, the system comprises: a feature extraction network, a corner prediction module and a post-processing module;
the feature extraction network takes the image to be detected and performs feature extraction to obtain a feature map; the corner prediction module predicts the coordinates of the upper left and lower right corners and pairs them, each pair of corners forming a rectangular detection frame; the post-processing module removes all implausible detection frames according to the screening conditions and then removes the lower-confidence frames among overlapping detection frames with a Soft-NMS algorithm to obtain the detection results, the confidence threshold of the Soft-NMS algorithm being set according to actual requirements; the helmet wearing condition is judged by whether the detection results contain a detection frame of the class head not wearing a safety helmet.
In the detection procedure, frame extraction is first performed on the input video and the image frames are resized to a uniform resolution to obtain the images to be detected; in this embodiment the uniform resolution is 512×512.
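A minimal OpenCV sketch of this frame extraction and resizing; the frame stride is an assumption:

```python
import cv2

def frames_to_inputs(video_path, stride=25, size=(512, 512)):
    """Extract every `stride`-th frame and resize it to the network's
    uniform input resolution (512x512 in this embodiment)."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % stride == 0:
            frames.append(cv2.resize(frame, size))
        i += 1
    cap.release()
    return frames
```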
In the feature extraction stage, image features are extracted and aggregated from the image to be detected by a lightweight feature extraction network, yielding an image feature map. In this embodiment, DLAnet is used as the feature extraction network to keep the module lightweight and the detection real-time.
In the corner prediction stage, for the obtained feature map, the upper left corner prediction module and the lower right corner prediction module respectively predict the target's upper left and lower right corners together with the centripetal vectors used for corner pairing.
In addition, as one of the notable innovations of the invention, in order to force the feature extraction network to acquire richer image features, the proposed boundary constraint center attention module is added to the corner prediction stage during network training only, so that the network additionally predicts the center point position and a boundary constraint vector for each object to be detected. During training, the data set is divided into a training set, a validation set and a test set: the training set is used to optimize the network parameters, the validation set to select the best result among the parameter sets obtained from multiple training runs, and the test set to check the generalization ability of the network. After training, the center attention module is pruned away, so it adds no time cost or model size during detection.
In the network post-processing stage, based on the predicted corner positions and centripetal vectors, corners belonging to the same target are paired and duplicate detections are eliminated by the Soft-NMS algorithm, after which the position and category information of the detected targets is confirmed and output. When the network output contains a detection result of the class not wearing a safety helmet, an alarm signal is issued.
The network structure of the corner prediction module is shown in FIG. 2. The helmet wearing detection system based on the central attention centripetal network detects a helmet target as a pair of key points, namely the upper left and lower right corners of its bounding box. Since the corners of an object often lie outside the object, there is usually no obvious local visual feature for deciding whether a pixel is a corner of the object, so the corner pooling operation is crucial to corner prediction. A conventional corner pooling layer applies, at each pixel, a maximum pooling operation along the horizontal direction to find the object's upper or lower boundary, and along the vertical direction to find its left or right boundary. This operation, however, makes the corners more sensitive to the object's edge features, while the object's internal information is not well focused onto the corner positions. To solve this problem without additional time cost, this embodiment adds a vertical-horizontal corner pooling layer to the corner prediction module for the helmet detection task.
For the helmet detection task, since the upper or lower edge of the target contains more critical information (e.g. whether a helmet is worn, whether a human torso is present) while the sides contain more irrelevant background, the vertical-horizontal corner pooling layer focuses the target's horizontal edge features and internal features onto the corner positions to be predicted. It uses convolution blocks, each consisting of a 3x3 convolution layer, a BN layer and a ReLU activation layer, to extract the features of the input feature map at the target interior, the target horizontal edges and the corner positions respectively. The maximum of the target's internal features is focused onto the horizontal edge positions by a vertical pooling operation and added to the feature values there; the new feature maxima are then focused onto the corner positions by a horizontal pooling operation and added to the local features at the corner positions. Thus, at a corner position to be predicted, the network simultaneously perceives the local features of that position, the target's horizontal edge features and the target's internal features, which helps it make more accurate predictions. Compared with an ordinary corner pooling layer, the vertical-horizontal corner pooling layer improves detection precision without extra time cost.
For the upper left corner prediction module, the vertical pooling (top pooling) and horizontal pooling (left pooling) operations at pixel (i, j) can be expressed by the following formulas respectively:
Top pooling: t_ij = max(f_ij, t_(i+1)j)
Left pooling: t_ij = max(f_ij, t_i(j+1))
Similarly, for the lower right corner prediction module, the vertical pooling (bottom pooling) and horizontal pooling (right pooling) operations at pixel (i, j) can be expressed by the following formulas respectively:
Bottom pooling: t_ij = max(f_ij, t_(i-1)j)
Right pooling: t_ij = max(f_ij, t_i(j-1))
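These four recurrences are running maxima along a row or column and can be sketched with torch.cummax as follows (single-channel (H, W) maps for brevity; the function names are illustrative):

```python
import torch

def top_pool(f):
    """Top pooling: t_ij = max(f_ij, t_(i+1)j), a bottom-to-top running
    maximum along the vertical axis (dim 0 of an (H, W) feature map)."""
    return torch.flip(torch.cummax(torch.flip(f, [0]), dim=0).values, [0])

def left_pool(f):
    """Left pooling: t_ij = max(f_ij, t_i(j+1)), right-to-left running max."""
    return torch.flip(torch.cummax(torch.flip(f, [1]), dim=1).values, [1])

def bottom_pool(f):
    """Bottom pooling: t_ij = max(f_ij, t_(i-1)j), top-to-bottom running max."""
    return torch.cummax(f, dim=0).values

def right_pool(f):
    """Right pooling: t_ij = max(f_ij, t_i(j-1)), left-to-right running max."""
    return torch.cummax(f, dim=1).values
```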
After the vertical-horizontal corner pooling layer, a set of corner positions is predicted as a heat map using a convolution block consisting of a 3x3 convolution layer, a BN (batch normalization) layer and a ReLU activation layer, followed by a 1x1 convolution layer, BN layer and ReLU activation layer, representing the coordinates and corresponding confidences of the corners of the different target classes. To obtain a more accurate bounding box and repair the precision loss caused by downsampling in the feature extraction network, the corner prediction module also predicts an offset (Δx, Δy) to fine-tune the corner position. Meanwhile, to match the upper left and lower right corners of the same object, the module predicts, for each detected corner, a centripetal vector pointing to the object's center.
For an object to be detected with upper left corner coordinates (tl_x, tl_y) and lower right corner coordinates (br_x, br_y), the center point coordinates can be calculated as:
ct_x = (tl_x + br_x) / 2,  ct_y = (tl_y + br_y) / 2
The centripetal vectors that the network is expected to predict point from each corner to this center at the feature-map scale, where s is the downsampling rate of the feature extraction network (taken as 4 in this embodiment):
v_tl = ((ct_x − tl_x) / s, (ct_y − tl_y) / s),  v_br = ((ct_x − br_x) / s, (ct_y − br_y) / s)
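A worked example of these two definitions for one ground-truth box (plain Python; the names are illustrative):

```python
def centripetal_targets(tlx, tly, brx, bry, s=4):
    """Ground-truth center and centripetal vectors for one object, following
    the formulas above (s is the downsampling rate, 4 in this embodiment)."""
    ctx, cty = (tlx + brx) / 2, (tly + bry) / 2
    v_tl = ((ctx - tlx) / s, (cty - tly) / s)  # points from the tl corner to the center
    v_br = ((ctx - brx) / s, (cty - bry) / s)  # points from the br corner to the center
    return (ctx, cty), v_tl, v_br

# e.g. a 100x60 box: center (82.0, 50.0), v_tl = (12.5, 7.5), v_br = (-12.5, -7.5)
print(centripetal_targets(32, 20, 132, 80))
```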
In addition, because of the corner pooling operation, the feature map forms a cross-star-shaped salient region centered on the corner position. To capture this rich context information, this embodiment adds a cross-star deformable convolution, a special deformable convolution structure. As shown in FIG. 4, for the input feature map a two-dimensional guide vector δ_tl (or δ_br, for the upper left and lower right corners respectively) is first predicted for each pixel by convolution, and the prediction of the guide vector is supervised in the training phase; δ is defined, like the centripetal vector, to point from the corner position to the target center:
δ_tl = ((ct_x − tl_x) / s, (ct_y − tl_y) / s),  δ_br = ((ct_x − br_x) / s, (ct_y − br_y) / s)
After the guide vector is obtained, a convolution layer generates an offset field that adjusts the shape of the convolution kernel, so the cross-star deformable convolution can purposefully capture the visual features clustered at the target edges. The central attention centripetal network uses the feature map output by the cross-star deformable convolution for centripetal vector prediction, and also concatenates it with the original feature map, introducing richer visual features for jointly predicting the corner positions and categories.
In this embodiment, the vertical-horizontal corner pooling layer in the corner prediction module gathers the target's internal features at the corner positions at low detection time cost, improving detection capability. Meanwhile, the cross-star deformable convolution captures richer visual features and is concatenated with the original feature map, introducing richer deep features for more accurate prediction of corner positions and categories.
To extract the target's internal features more effectively and reduce the dependence of the helmet wearing detection system on edge information, this embodiment provides a boundary constraint center attention module used only in the training stage: the network additionally predicts the target's center point and a set of boundary constraint vectors during training, forcing the feature extraction network to attend to the target's internal features. It is a cost-free module that improves network detection precision.
The structure of the proposed boundary constraint center attention module is shown in FIG. 3. Since a target's most important internal features are not necessarily concentrated exactly at its geometric center, a center pooling layer is added to the module. To determine whether a pixel is a center point, the maxima along its horizontal and vertical directions must be found and added. For each pixel of the input feature map, the module first uses a convolution block consisting of a 3x3 convolution layer, a BN layer and a ReLU activation layer to extract and preserve the local features of that location; a left pooling module and a right pooling module in series find the feature maximum in the horizontal direction, a top pooling module and a bottom pooling module in series find the feature maximum in the vertical direction, and the two are added. Through the center pooling layer, the boundary constraint center attention module acquires the most discriminative internal feature information and forces the feature extraction network to learn to extract critical internal information.
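Ignoring the surrounding convolution blocks, the core of the center pooling layer reduces to adding the row maximum and the column maximum at every pixel, as in the following sketch:

```python
import torch

def center_pool(f):
    """Center pooling sketch on an (H, W) map: left+right pooling in series
    yields the horizontal maximum at every pixel, top+bottom pooling in
    series yields the vertical maximum, and the two are added."""
    horizontal = f.max(dim=1, keepdim=True).values.expand_as(f)
    vertical = f.max(dim=0, keepdim=True).values.expand_as(f)
    return horizontal + vertical
```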
After the center pooling layer, for a target to be detected, the target's center point coordinates (ct_x, ct_y) and corresponding confidence score are predicted as a heat map through a set of convolution blocks, and a set of offsets (Δct_x, Δct_y) is predicted to obtain a more accurate position.
Meanwhile, making the feature extraction network sensitive to target size helps the corner prediction module predict more accurate centripetal vectors and thus improves corner pairing accuracy. A boundary constraint vector prediction branch is therefore added to the boundary constraint center attention module; it predicts a set of boundary constraint vectors representing the target's boundary size constraint, forcing the feature extraction network to capture the target's scale information. The predicted boundary constraint (l_h, l_w) can be expressed as:
l_h = (br_y − tl_y) / s,  l_w = (br_x − tl_x) / s
where s is the downsampling rate, (tl_x, tl_y) are the coordinates of the upper left corner and (br_x, br_y) are the coordinates of the lower right corner.
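The corresponding ground-truth boundary constraint for one object, following the formula above (plain Python, names illustrative):

```python
def boundary_constraint(tlx, tly, brx, bry, s=4):
    """Ground-truth boundary constraint vector (l_h, l_w) for one object:
    the box height and width measured at the feature-map scale, where s is
    the downsampling rate of the feature extraction network."""
    l_h = (bry - tly) / s
    l_w = (brx - tlx) / s
    return l_h, l_w
```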
the training process of the helmet wearing detection system of the embodiment is as follows:
Step 1, image collection: in this embodiment, construction-scene images are retrieved and downloaded using web crawler technology; in other embodiments the required images can be obtained directly from construction-site surveillance data. The images contain personnel wearing or not wearing safety helmets; images that do not meet the training requirements, such as single-background images, non-construction scenes and advertising posters, are screened out. To improve the generalization of the trained model, difficult scenes such as small-scale personnel, night-time sites, occluded people and dense crowds should be added as appropriate.
Step 2, image labeling: the position and category of each person's helmet is labeled in every image. The labeled region is the person's head region, i.e. the whole area including the helmet (or the bare head). The labeled information consists of the upper left and lower right coordinates of the region, denoted (x_min, y_min) and (x_max, y_max) respectively, together with whether a helmet is worn and its color, i.e. five categories (blue, red, yellow, white, and not wearing a helmet). The labeling information is stored in an XML file.
Step 3, data set division: the data is divided in the ratio 5:2.5:2.5 (training set : validation set : test set); the training set is used for back-propagation optimization of the network parameters, the validation set for selecting the optimal parameter set among the results of multiple training runs, and the test set for the final evaluation of the model. The information of the divided data sets is stored in three JSON files (i.e. MS COCO data set format), as sketched below.
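A minimal sketch of this 5:2.5:2.5 division; the file names and the JSON payload are assumptions (a full COCO-format file additionally carries images, annotations and categories fields):

```python
import json
import random

def split_dataset(samples, seed=0):
    """Shuffle the annotated samples, split them 5:2.5:2.5 into
    train/val/test, and store each subset as a JSON file."""
    random.Random(seed).shuffle(samples)
    n = len(samples)
    splits = {
        'train.json': samples[: int(0.5 * n)],
        'val.json': samples[int(0.5 * n): int(0.75 * n)],
        'test.json': samples[int(0.75 * n):],
    }
    for name, subset in splits.items():
        with open(name, 'w') as fh:
            json.dump(subset, fh)
```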
Step 4, network training: build the complete central attention network model and initialize its parameters; the modules used comprise the feature extraction network, the corner prediction module and the boundary constraint center attention module. Supervised training is performed on the constructed training set, and the network parameters are optimized by back propagation until the network converges (during the training phase, corner pairing and NMS post-processing of the network outputs are not required). This step is repeated several times to obtain multiple parameter sets from which the optimal result is chosen on the validation set.
Step 5, network testing: for the trained network, the boundary constraint center attention module is first pruned away; the average precision is measured on the validation set, the parameter set with the highest average precision is selected as the final training result, and its generalization is tested on the test set.

Claims (9)

1. A helmet wearing detection method based on a central attention network, characterized by comprising the following steps:
S1, normalizing the size of an image to be detected;
S2, extracting features from the image to be detected to obtain a feature map, and performing offset prediction, corner position prediction and centripetal vector prediction on the feature map with a corner prediction module, wherein the corner prediction module comprises: a vertical-horizontal corner pooling layer, an offset prediction module, a corner position prediction module and a centripetal vector prediction module;
the vertical-horizontal corner pooling layer takes the feature map output by a feature extraction network and extracts, using convolution blocks each consisting of a 3x3 convolution layer, a BN layer and a ReLU activation layer, the features of the input feature map at the target interior, the target horizontal edges and the corner positions respectively; the maximum of the target's internal features is focused onto the horizontal edge positions by a vertical pooling operation and added to the feature values there, after which the new feature maxima are focused onto the corner positions by a horizontal pooling operation and added to the local features at the corner positions, so that at each corner position to be predicted the local features of that position, the target's horizontal edge features and the target's internal features are perceived simultaneously; the features output by the vertical-horizontal corner pooling layer pass through the offset prediction module, the corner position prediction module and the centripetal vector prediction module to obtain an offset heat map, a corner position heat map and a centripetal vector heat map respectively;
S3, performing local-maximum screening and TopK screening on the corner position heat map and filtering redundant corner detection results to obtain an upper left corner coordinate set and a lower right corner coordinate set, together with the category and confidence of each corner in each set, wherein the corner categories comprise: head wearing a safety helmet and head not wearing a safety helmet;
S4, correcting the corner positions by using the offset heat map to obtain a corrected upper left corner coordinate set and a corrected lower right corner coordinate set;
S5, constructing candidate detection frames from the corrected upper left and lower right corner coordinate sets, and calculating the centripetal region of each candidate detection frame;
S6, post-processing the candidate detection frames to obtain final detection frames, and judging the helmet wearing condition according to the category of each detection frame.
2. The helmet wearing detection method based on a central attention network according to claim 1, wherein the corner categories of the head wearing a safety helmet comprise: head wearing a red safety helmet, head wearing a blue safety helmet, head wearing a white safety helmet, and head wearing a yellow safety helmet.
3. The helmet wearing detection method based on a central attention network according to claim 1, wherein the feature extraction in step S2 is implemented by a feature extraction network with a given downsampling rate, and the correction of the corner positions in step S4 comprises the steps of:
S41, using the x and y coordinates of each corner in the upper left corner coordinate set tl and the lower right corner coordinate set br as indexes, retrieving the offset heat map to obtain the offset of each corner;
S42, multiplying each corner coordinate by the downsampling rate of the feature extraction network to map the corner position in the offset heat map back to the corresponding position in the input image, and adding the offset to the coordinates to correct the precision loss of the downsampling process, thereby obtaining the corrected upper left corner coordinate set tl and lower right corner coordinate set br.
4. A method for detecting the wearing of a helmet based on a central attention network according to claim 3, wherein the method for constructing a candidate detection frame in step S5 is as follows:
S51, acquiring a corrected upper left corner coordinate set tl and a corrected lower right corner coordinate set br, taking x and y coordinates of each corner in the upper left corner coordinate set tl and the corrected lower right corner coordinate set br as indexes, searching a centripetal vector heat map to acquire a centripetal vector of a current corner, adding each upper left corner coordinate in the upper left corner coordinate set tl with a corresponding centripetal vector, thereby acquiring an upper left corner target center coordinate (tl ctx,tlcty) corresponding to each upper left corner, and similarly calculating to obtain a lower right corner target center coordinate (br ctx,brcty) of each lower right corner;
S52, combining the left upper corner coordinates in the left upper corner coordinate set tl with all the right lower corner coordinates in the right lower corner coordinate set br one by one, performing exhaustive pairing to obtain a pairing matrix as a candidate detection frame, wherein a pairing matrix corresponding to a pairing result of the ith left upper corner in the left upper corner coordinate set tl and the jth right lower corner in the right lower corner coordinate set br is bboxij=(tlx i,tly i,brx j,bry j),, and storing target centers (tl ctx i,tlcty i) and (br ctx j,brcty j) corresponding to each bbox ij for pairing screening, and calculating target confidence of the target centers:
scoreij=(tl_scorei+br_scorej)/2,
where tl_score i is the confidence score of the i-th upper left corner and br_score j is the confidence score of the j-th lower right corner.
5. The helmet wearing detection method based on a central attention centripetal network according to claim 4, wherein the centripetal region in step S5 is calculated as follows: for every candidate detection frame, a centripetal region R_central_ij = (ctlx_ij, ctly_ij, cbrx_ij, cbry_ij) is defined, in which ctlx_ij, ctly_ij, cbrx_ij and cbry_ij are calculated by the following formula:
where μ is a hyperparameter, ctlx_ij is the abscissa of the upper left corner of the centripetal region, ctly_ij is the ordinate of the upper left corner of the centripetal region, cbrx_ij is the abscissa of the lower right corner of the centripetal region, and cbry_ij is the ordinate of the lower right corner of the centripetal region.
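The formula referenced above survives only as an image in the source text. Assuming the patent follows the centripetal-region definition of CentripetalNet, on which it appears to build, the region plausibly shrinks the candidate box about its center by the factor μ (this reconstruction is an assumption, not the patent's verbatim formula):

\begin{aligned}
ctlx_{ij} &= \tfrac{1}{2}\big((1+\mu)\,tlx_i + (1-\mu)\,brx_j\big), &
ctly_{ij} &= \tfrac{1}{2}\big((1+\mu)\,tly_i + (1-\mu)\,bry_j\big),\\
cbrx_{ij} &= \tfrac{1}{2}\big((1-\mu)\,tlx_i + (1+\mu)\,brx_j\big), &
cbry_{ij} &= \tfrac{1}{2}\big((1-\mu)\,tly_i + (1+\mu)\,bry_j\big),
\end{aligned}

that is, a box with the same center as bbox_ij and μ times its width and height, with 0 < μ ≤ 1.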
6. The helmet wearing detection method based on a central attention network according to claim 5, wherein the post-processing in step S6 includes screening and filtering the candidate detection frames, i.e. removing all impossible detection frames according to the following filtering conditions:
if the categories of the two corner points are inconsistent, the detection frame is removed, the judgment formula being:
tl_clses_i ≠ br_clses_j,
where tl_clses_i is the category of the upper left corner of the detection frame and br_clses_j is the category of the lower right corner of the detection frame;
if the upper left corner point is not located to the upper left of the lower right corner point, the detection frame is removed, the judgment formula being:
(tlx_i > brx_j) | (tly_i > bry_j),
where (tlx_i, tly_i) are the coordinates of the upper left corner point of the detection frame and (brx_j, bry_j) are the coordinates of the lower right corner point of the detection frame;
if the predicted target center positions do not fall within the centripetal region, the detection frame is removed, the judgment formula being:
where (tl_ctx_i, tl_cty_i) and (br_ctx_j, br_cty_j) are the target center coordinates and (ctlx_ij, ctly_ij, cbrx_ij, cbry_ij) are obtained from the centripetal region.
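A compact sketch of the three filtering conditions above: category agreement, geometric validity, and the centripetal-region test (whose judgment formula is likewise an image in the source, so the containment check below is our reading of it); the helper names are assumptions for illustration.

def keep_box(tl_cls, br_cls, tl_xy, br_xy, tl_center, br_center, region):
    # region = (ctlx, ctly, cbrx, cbry): centripetal region of this pairing
    if tl_cls != br_cls:                               # corner categories must match
        return False
    if tl_xy[0] > br_xy[0] or tl_xy[1] > br_xy[1]:     # tl must lie upper-left of br
        return False
    ctlx, ctly, cbrx, cbry = region
    for cx, cy in (tl_center, br_center):              # both predicted centers must
        if not (ctlx <= cx <= cbrx and ctly <= cy <= cbry):  # fall inside the region
            return False
    return True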
7. The helmet wearing detection method based on a central attention network according to claim 6, wherein after the candidate detection frames are screened and filtered, a Soft-NMS algorithm is used to remove, from among overlapping detection frames, those whose confidence does not meet a preset condition.
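The patent does not specify which Soft-NMS variant is used; the sketch below shows the common Gaussian-decay form of Bodla et al., with sigma and the score threshold as assumed parameters.

import numpy as np

def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    # Instead of deleting overlapping boxes outright, decay their scores
    scores = scores.astype(float).copy()
    keep = []
    while scores.max() > score_thresh:
        i = int(scores.argmax())
        keep.append((boxes[i], float(scores[i])))
        for j in range(len(scores)):
            if j != i and scores[j] > score_thresh:
                scores[j] *= np.exp(-iou(boxes[i], boxes[j]) ** 2 / sigma)
        scores[i] = 0.0   # mark the kept box as consumed
    return keep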
8. A helmet wearing detection system for performing the helmet wearing detection method based on a central attention network as claimed in any one of claims 1 to 7, characterized in that the helmet wearing detection system comprises: a normalization module, a feature extraction network, a corner prediction module, a corner screening module, a corner position correction module, a detection frame construction module and a post-processing module;
the normalization module normalizes the size of the image to be detected; the feature extraction network performs feature extraction on the image to be detected to obtain a feature map; the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction on the feature map to obtain an offset heat map, a corner position heat map and a centripetal vector heat map; the corner screening module performs local-maximum screening and TopK screening on the corner position heat map and filters out redundant corner detections to obtain an upper left corner coordinate set and a lower right corner coordinate set, together with the category and confidence of each corner in each set, the corner categories comprising: head wearing a safety helmet and head not wearing a safety helmet; the corner position correction module corrects the corner positions by using the offset heat map to obtain a corrected upper left corner coordinate set and a corrected lower right corner coordinate set; the detection frame construction module constructs candidate detection frames from the corrected upper left corner coordinate set and lower right corner coordinate set and calculates the centripetal region of each candidate detection frame; and the post-processing module post-processes the candidate detection frames to obtain final detection frames, and judges the safety helmet wearing condition according to the category of each detection frame.
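Of the modules above, only the corner screening step has not been illustrated yet. A minimal sketch of local-maximum screening followed by TopK selection on a corner heat map might look as follows; the shapes and names are assumptions.

import numpy as np

def screen_corners(heatmap, k=100):
    # heatmap: (C, H, W) per-category corner scores
    c, h, w = heatmap.shape
    keep = np.zeros_like(heatmap)
    for ch in range(c):
        for y in range(h):
            for x in range(w):
                y0, y1 = max(0, y - 1), min(h, y + 2)
                x0, x1 = max(0, x - 1), min(w, x + 2)
                if heatmap[ch, y, x] >= heatmap[ch, y0:y1, x0:x1].max():
                    keep[ch, y, x] = heatmap[ch, y, x]   # 3x3 local maximum
    flat = keep.ravel()
    idx = np.argsort(flat)[::-1][:k]                     # TopK screening
    cats, ys, xs = np.unravel_index(idx, keep.shape)
    # Each corner: (confidence, category, x, y)
    return [(float(flat[i]), int(cc), int(xx), int(yy))
            for i, cc, xx, yy in zip(idx, cats, xs, ys)]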
9. The helmet wearing detection system of claim 8, wherein, during training, the helmet wearing detection system further comprises a boundary constraint central attention module, the boundary constraint central attention module comprising: a central pooling layer, an offset prediction module, a center point position prediction module and a boundary constraint vector prediction module;
the central pooling layer takes the feature map output by the feature extraction network and enables the boundary constraint central attention module to acquire the most discriminative internal feature information, forcing the feature extraction network to learn to extract critical internal information; the target to be detected is passed through the central pooling layer to the offset prediction module, the center point position prediction module and the boundary constraint vector prediction module respectively; the center point position prediction module predicts the center point coordinates and the corresponding confidence score, the offset prediction module predicts an offset to make the position more accurate, and the boundary constraint vector prediction module predicts a group of boundary constraint vectors representing the boundary size constraint of the target, thereby forcing the feature extraction network to capture the scale information of the target, the boundary constraint (l_h, l_w) being expressed as:
where s is the downsampling rate, (tl_x, tl_y) are the coordinates of the upper left corner and (br_x, br_y) are the coordinates of the lower right corner.
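The expression for (l_h, l_w) is likewise an image in the original publication. Given that s is the downsampling rate and the two corners bound the target, one natural reading, offered here purely as an assumption, is the box height and width measured on the downsampled feature map:

\[
l_h = \frac{br_y - tl_y}{s}, \qquad l_w = \frac{br_x - tl_x}{s}.
\]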