CN114067365A - Safety helmet wearing detection method and system based on central attention centripetal network - Google Patents

Safety helmet wearing detection method and system based on central attention centripetal network

Info

Publication number
CN114067365A
CN114067365A
Authority
CN
China
Prior art keywords: corner, centripetal, corner point, point, upper left
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111397722.1A
Other languages
Chinese (zh)
Inventor
蔡念
刘至键
陈妍帆
陈煜
张承滨
王晗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN202111397722.1A
Publication of CN114067365A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/04 - Architecture, e.g. interconnection topology
    • G06N3/045 - Combinations of networks

Abstract

The invention provides a safety helmet wearing detection method and system based on a central attention centripetal network, and relates to the technical field of intelligent analysis of video surveillance. The method uses the central attention centripetal network to detect helmet wearing in acquired images; the network learns features automatically, without manual feature design, and therefore generalizes well. The network uses a corner prediction module to predict the positions of the upper left and lower right corner points separately, and obtains the target detection frame by pairing them. The corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction, which yields more accurate corner positions and higher detection precision. A boundary constraint central attention module is added during training, and a vertical-horizontal corner pooling layer is added in corner prediction, further improving prediction accuracy.

Description

Safety helmet wearing detection method and system based on central attention centripetal network
Technical Field
The invention relates to the technical field of intelligent analysis of video surveillance, and in particular to a safety helmet wearing detection method and system based on a central attention centripetal network.
Background
The construction industry is one of the pillar industries of China. However, owing to factors such as long production cycles, much outdoor and high-altitude work, and complex construction processes, it is also one of the industries with the most serious production safety accidents in China. Falls from height and object-strike accidents account for the majority of the total number of accidents, and the consequences of head injuries are particularly severe compared with injuries to other parts of the body. Correct use of safety helmets can greatly reduce fatal head injuries; in actual work, however, many workers are careless and fail to wear helmets as required. Monitoring workers' helmet wearing through automated techniques and issuing timely warnings is therefore of great significance for protecting workers' lives and reducing the accident rate of the construction industry. At the same time, recognizing the color of the safety helmet makes it possible to distinguish construction personnel of different identities and posts, strengthening personnel management on the construction site.
Early helmet wearing detection methods were based mainly on sensors or traditional vision techniques. Sensor-based detection methods focus on remote positioning and tracking technologies such as RFID and wireless local area networks: electronic tags and sensors are embedded in the helmet for real-time positioning and tracking, while physical means such as pressure, infrared beams and heat confirm whether the helmet is being worn. These methods are inefficient and costly: because thousands of helmets must be fitted with electronic tags and sensors and fixed data base stations must be established, high installation and maintenance costs are unavoidable, and the many inconveniences of this mode of application hinder large-scale practical adoption.
In comparison, vision-based methods combine video surveillance with computer vision detection technology. Automatic real-time monitoring of a construction site can be achieved simply by deploying a certain number of cameras to cover the construction area; workers who are not wearing helmets correctly are detected in time from the monitoring images and an alarm is raised, at lower cost and with a flexible mode of application. However, early methods of this kind were based mainly on traditional image processing: image features such as LBP, Hu invariant moments, color histograms and HOG had to be extracted by manually designed feature extraction algorithms and combined with traditional machine learning to obtain detection results, with the drawbacks of low speed, complex design and poor generalization.
Therefore, in recent years, methods based on deep learning have become the mainstream direction of research on the helmet wearing detection problem. Deep-learning-based helmet wearing detection is data-driven: features are learned automatically from existing data, which solves the problem that manually designed features cannot adapt to actual detection scenes. Nevertheless, owing to the complexity of the detection environment, and although universality is greatly improved over traditional vision techniques, deep-learning-based helmet wearing detection still faces many challenges: some existing schemes are easily affected by small targets, night scenes, dense crowds, similar articles and other factors in practical application, which limits detection precision. Studying a novel object detection model to construct a real-time helmet wearing detection method with high precision and high robustness therefore has strong practical significance.
For example, the method of publication CN112949354A (published 2021-06-11) is susceptible to small targets, night scenes, dense crowds, similar articles and other factors in practical application, and its detection precision is low.
Disclosure of Invention
In order to overcome the technical problems, the invention provides a method and a system for detecting the wearing of a safety helmet based on a central attention centripetal network, which have high precision and high robustness.
The technical scheme of the invention is as follows:
A safety helmet wearing detection method based on a central attention centripetal network, the method comprising the following steps:
s1, normalizing the size of the image to be detected;
s2, extracting features of the image to be detected to obtain a feature map, and performing offset prediction, corner position prediction and centripetal vector prediction on the feature map by using a corner prediction module to obtain an offset heat map, a corner position heat map and a centripetal vector heat map;
s3, carrying out local maximum value screening and TopK screening processing on the corner position heat map, filtering out redundant corner detection results, obtaining an upper left corner point coordinate set and a lower right corner point coordinate set, and the category and the confidence coefficient of each corner in each coordinate set, wherein the corner categories comprise: the head wears the safety helmet and does not wear the safety helmet;
s4, correcting the corner positions by using the offset heat map to obtain a corrected coordinate set of a top left corner point and a corrected coordinate set of a bottom right corner point;
s5, constructing a candidate detection frame through the corrected coordinate set of the upper left corner point and the coordinate set of the lower right corner point, and calculating a centripetal area of the candidate detection frame;
and S6, carrying out post-processing on the candidate detection frames to obtain a final detection frame, and judging the wearing condition of the safety helmet according to the types of the detection frames.
This technical scheme provides a safety helmet wearing detection method and system based on a central attention centripetal network. Helmet wearing detection is performed on acquired images using the central attention centripetal network; the network learns features automatically, without manual feature design, and therefore generalizes well. The network uses a corner prediction module to predict the positions of the upper left and lower right corner points separately and obtains the target detection frame by pairing them; the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction, thereby obtaining more accurate corner positions and higher detection precision.
Further, the corner categories of the head-worn safety helmet include: the head wears a red safety helmet, the head wears a blue safety helmet, the head wears a white safety helmet and the head wears a yellow safety helmet.
Further, the feature extraction in step S2 is implemented by a feature extraction network, the feature extraction network has a down-sampling rate, and the correction of the corner position in step S4 includes the steps of:
s41, retrieving an offset heat map by taking the x and y coordinates of each corner point in the upper left corner point coordinate set tl and the lower right corner point coordinate set br as indexes to acquire the offset of each corner point;
and S42, multiplying each corner coordinate by the down-sampling rate of the feature extraction network to map the corner position in the offset heat map back to the corresponding position of the input image, adding the coordinates and the offset to correct the precision loss in the down-sampling process, and obtaining a corrected upper left corner point coordinate set tl and a corrected lower right corner point coordinate set br.
Further, the method for constructing the candidate detection box in step S5 is as follows:
S51, obtaining the corrected upper left corner point coordinate set tl and lower right corner point coordinate set br, searching the centripetal vector heat map with the x and y coordinates of each corner point in tl and br as indexes to obtain the centripetal vector of the current corner point, and adding each upper left corner point coordinate in tl to its corresponding centripetal vector, thereby obtaining the target center coordinate (tl_ctx, tl_cty) corresponding to each upper left corner point; the target center coordinate (br_ctx, br_cty) of each lower right corner point is calculated in the same way;
S52, combining the coordinates of each upper left corner point in the upper left corner point coordinate set tl with the coordinates of all lower right corner points in the lower right corner point coordinate set br one by one, exhaustively pairing them to obtain a pairing matrix of candidate detection frames, wherein the candidate frame corresponding to the pairing of the i-th upper left corner point in tl and the j-th lower right corner point in br is bbox_ij = (tl_x^i, tl_y^i, br_x^j, br_y^j), and the target centers (tl_ctx^i, tl_cty^i) and (br_ctx^j, br_cty^j) corresponding to each bbox_ij are saved for pairing screening; the target confidence of the target center is calculated as:

score_ij = (tl_score_i + br_score_j)/2,

where tl_score_i is the confidence score of the i-th upper left corner point and br_score_j is the confidence score of the j-th lower right corner point.
Further, the method for calculating the centripetal region in step S5 is as follows: for all candidate detection frames, a centripetal region R_central^ij = (ctl_x^ij, ctl_y^ij, cbr_x^ij, cbr_y^ij) is defined, where ctl_x^ij, ctl_y^ij, cbr_x^ij and cbr_y^ij are calculated by the following formulas:

ctl_x^ij = ((1+μ)·tl_x^i + (1-μ)·br_x^j)/2
ctl_y^ij = ((1+μ)·tl_y^i + (1-μ)·br_y^j)/2
cbr_x^ij = ((1-μ)·tl_x^i + (1+μ)·br_x^j)/2
cbr_y^ij = ((1-μ)·tl_y^i + (1+μ)·br_y^j)/2

where μ is a hyperparameter, ctl_x^ij is the abscissa of the upper left corner point of the centripetal region, ctl_y^ij is the ordinate of the upper left corner point of the centripetal region, cbr_x^ij is the abscissa of the lower right corner point of the centripetal region, and cbr_y^ij is the ordinate of the lower right corner point of the centripetal region.
Further, the post-processing of step S6 includes performing a filtering process on the candidate test frames, where the filtering process is to remove all impossible test frames according to a filtering condition, where the filtering condition includes:
if the categories of the two corner points are inconsistent, the detection frame is removed, with the judgment formula:

tl_clses_i ≠ br_clses_j,

where tl_clses_i is the category of the upper left corner point of the detection frame and br_clses_j is the category of the lower right corner point of the detection frame;

if the upper left corner point is not located to the upper left of the lower right corner point, the detection frame is removed, with the judgment formula:

tl_x^i > br_x^j | tl_y^i > br_y^j,

where (tl_x^i, tl_y^i) are the coordinates of the upper left corner point of the detection frame and (br_x^j, br_y^j) are the coordinates of the lower right corner point of the detection frame;
if the predicted target center position is not in the centripetal region, the detection frame is removed; that is, the detection frame is kept only when all of the following hold:

ctl_x^ij < tl_ctx^i < cbr_x^ij
ctl_y^ij < tl_cty^i < cbr_y^ij
ctl_x^ij < br_ctx^j < cbr_x^ij
ctl_y^ij < br_cty^j < cbr_y^ij

where tl_ctx^i, tl_cty^i, br_ctx^j and br_cty^j are all target center coordinates, and (ctl_x^ij, ctl_y^ij, cbr_x^ij, cbr_y^ij) is obtained from the centripetal region.
Further, after the candidate detection frames are screened and filtered, a Soft-NMS algorithm is adopted to remove the detection frames with the reliability not meeting the preset condition in the overlapped detection frames.
A headgear wear detection system, the headgear wear detection system comprising: the device comprises a normalization module, a feature extraction network, an angular point prediction module, an angular point screening module, an angular point position correction module, a detection frame construction module and a post-processing module;
the normalization module normalizes the size of the image to be detected; the feature extraction network extracts features of an image to be detected to obtain a feature map, and the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction on the feature map to obtain an offset heat map, a corner position heat map and a centripetal vector heat map; the corner screening module performs local maximum screening and TopK screening processing on the corner position heat map, filters redundant corner detection results, obtains an upper left corner point coordinate set and a lower right corner point coordinate set, and a category and a confidence of each corner in each coordinate set, wherein the corner categories comprise: the head wears the safety helmet and does not wear the safety helmet; the corner point position correction module corrects the corner point position by using the offset heat map to obtain a corrected coordinate set of a left upper corner point and a corrected coordinate set of a right lower corner point; the detection frame construction module constructs a candidate detection frame through the corrected coordinate set of the upper left corner point and the coordinate set of the lower right corner point, and calculates a centripetal area of the candidate detection frame; and the post-processing module performs post-processing on the detection frame to obtain a final detection frame, and judges the wearing condition of the safety helmet according to the type of the detection frame.
Further, in the process of training the safety helmet wearing detection system, the safety helmet wearing detection system further comprises a boundary constraint center attention module, and the boundary constraint center attention module comprises: the device comprises a central pooling layer, an offset prediction module, a central point position prediction module and a boundary constraint vector prediction module;
The central pooling layer acquires the feature map output by the feature extraction network. Through the central pooling layer, the boundary constraint central attention module acquires the most discriminative internal feature information, forcing the feature extraction network to learn the ability to extract critical internal information. The features of the object to be detected pass through the central pooling layer and are transmitted respectively to the offset prediction module, the central point position prediction module and the boundary constraint vector prediction module; the central point position prediction module predicts a center point coordinate and a corresponding confidence score, the offset prediction module predicts an offset to make the position more accurate, and the boundary constraint vector prediction module predicts a group of boundary constraint vectors representing the boundary size constraint of the object, thereby forcing the feature extraction network to capture the scale information of the object. The boundary constraint (l_h, l_w) is expressed as:

(l_h, l_w) = (log((br_y - tl_y)/s), log((br_x - tl_x)/s)),

where s is the down-sampling rate, (tl_x, tl_y) are the coordinates of the upper left corner point, and (br_x, br_y) are the coordinates of the lower right corner point.
Further, the corner prediction module comprises: the device comprises a vertical-horizontal corner pooling layer, an offset prediction module, a corner position prediction module and a centripetal vector prediction module;
the method comprises the steps that a vertical-horizontal corner pooling layer obtains a feature map output by a feature extraction network, and the vertical-horizontal corner pooling layer uses a convolution block consisting of a 3x3 convolution layer, a BN layer and a ReLU activation function layer to respectively extract features of an input feature map in a target, a target horizontal edge and a corner position; focusing the maximum value of the internal feature of the target at the position of a horizontal edge by means of vertical pooling operation, adding the maximum value of the internal feature of the target to the feature value of the position, and then focusing the new maximum value of the feature at the position of an angular point by means of horizontal pooling operation and adding the new maximum value of the feature to the local feature of the angular point; the features output by the vertical-horizontal corner pooling layer pass through an offset prediction module, a corner position prediction module and a centripetal vector prediction module to obtain an offset heat map, a corner position heat map and a centripetal vector heat map respectively.
This technical scheme provides a safety helmet wearing detection method and system based on a central attention centripetal network. Compared with the prior art, its beneficial effects are as follows: helmet wearing detection is performed on acquired images using the central attention centripetal network, which learns features automatically without manual feature design and therefore generalizes well; the network uses a corner prediction module to predict the positions of the upper left and lower right corner points separately and obtains the target detection frame by pairing them; the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction, yielding more accurate corner positions and higher detection precision; and a boundary constraint central attention module is added during training and a vertical-horizontal corner pooling layer is added in corner prediction, further improving prediction accuracy.
Drawings
FIG. 1 is a schematic diagram of a central attention centripetal network architecture;
FIG. 2 is a schematic diagram of a network architecture of a corner prediction module;
FIG. 3 is a schematic diagram of a network structure of a boundary constraint center attention module;
fig. 4 is a schematic diagram of a cross-shaped star deformable convolution module network structure.
Detailed Description
For clearly explaining the method and system for detecting the wearing of the safety helmet based on the central attention centripetal network, the invention is further described with reference to the embodiments and the accompanying drawings, but the scope of the invention should not be limited thereby.
Example 1
A safety helmet wearing detection method based on a central attention centripetal network comprises the following steps:
s1, normalizing the size of the image to be detected;
s2, extracting features of the image to be detected to obtain a feature map, and performing offset prediction, corner position prediction and centripetal vector prediction on the feature map by using a corner prediction module to obtain an offset heat map, a corner position heat map and a centripetal vector heat map;
s3, carrying out local maximum value screening and TopK screening processing on the corner position heat map, filtering out redundant corner detection results, obtaining an upper left corner point coordinate set and a lower right corner point coordinate set, and the category and the confidence coefficient of each corner in each coordinate set, wherein the corner categories comprise: the head wears the safety helmet and does not wear the safety helmet;
s4, correcting the corner positions by using the offset heat map to obtain a corrected coordinate set of a top left corner point and a corrected coordinate set of a bottom right corner point;
s5, constructing a candidate detection frame through the corrected coordinate set of the upper left corner point and the coordinate set of the lower right corner point, and calculating a centripetal area of the candidate detection frame;
and S6, carrying out post-processing on the candidate detection frames to obtain a final detection frame, and judging the wearing condition of the safety helmet according to the types of the detection frames.
This embodiment discloses a safety helmet wearing detection method based on a central attention centripetal network. Helmet wearing detection is performed on acquired images using the central attention centripetal network; the network learns features automatically, without manual feature design, and therefore generalizes well. The network uses a corner prediction module to predict the positions of the upper left and lower right corner points separately and obtains the target detection frame by pairing them; the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction, thereby obtaining more accurate corner positions and higher detection precision. A minimal decoding sketch follows.
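As an illustration only, the following Python/PyTorch sketch shows how steps S2 and S3 can be decoded; `backbone`, `corner_head` and the tensor layout are hypothetical stand-ins for the modules described above, not the patented implementation.

```python
import torch
import torch.nn.functional as F

def topk_corners(heat, k=100):
    # S3: local-maximum screening via 3x3 max pooling, then TopK screening
    pooled = F.max_pool2d(heat, 3, stride=1, padding=1)
    heat = heat * (pooled == heat)              # keep local maxima only
    b, c, h, w = heat.shape
    scores, idx = heat.flatten(1).topk(k)       # (b, k) confidences
    cls = idx // (h * w)                        # corner category
    ys = (idx % (h * w)) // w                   # row on the heat map
    xs = idx % w                                # column on the heat map
    return xs, ys, cls, scores

def detect(image, backbone, corner_head):
    feat = backbone(image)                      # S2: feature map
    (tl_heat, br_heat, tl_off, br_off,
     tl_vec, br_vec) = corner_head(feat)        # S2: three heat maps per corner
    tl = topk_corners(tl_heat)                  # S3: upper left corner set
    br = topk_corners(br_heat)                  # S3: lower right corner set
    # S4-S6 (offset correction, pairing, centripetal regions, screening,
    # Soft-NMS) are sketched in the later embodiments.
    return tl, br
```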
Example 2
A safety helmet wearing detection method based on a central attention centripetal network comprises the following steps:
s1, normalizing the size of the image to be detected;
s2, extracting features of the image to be detected to obtain a feature map, and performing offset prediction, corner position prediction and centripetal vector prediction on the feature map by using a corner prediction module to obtain an offset heat map, a corner position heat map and a centripetal vector heat map; the feature extraction is realized through a feature extraction network, and the feature extraction network is provided with a down sampling rate;
s3, carrying out local maximum value screening and TopK screening processing on the corner position heat map, filtering out redundant corner detection results, obtaining an upper left corner point coordinate set and a lower right corner point coordinate set, and the category and the confidence coefficient of each corner in each coordinate set, wherein the corner categories comprise: the head wears the safety helmet and does not wear the safety helmet;
the corner point categories of the head-worn crash helmet include: the head wears a red safety helmet, the head wears a blue safety helmet, the head wears a white safety helmet and the head wears a yellow safety helmet;
s4, correcting the corner positions by using the offset heat map to obtain a corrected coordinate set of a top left corner point and a corrected coordinate set of a bottom right corner point;
the correction of the angular position comprises the following steps:
s41, retrieving an offset heat map by taking the x and y coordinates of each corner point in the upper left corner point coordinate set tl and the lower right corner point coordinate set br as indexes to acquire the offset of each corner point;
and S42, multiplying each corner coordinate by the down-sampling rate of the feature extraction network to map the corner position in the offset heat map back to the corresponding position of the input image, adding the coordinates and the offset to correct the precision loss in the down-sampling process, and obtaining a corrected upper left corner point coordinate set tl and a corrected lower right corner point coordinate set br.
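A minimal sketch of this correction, assuming `xs` and `ys` are the integer corner indices on the heat map, `offset` is a (2, H, W) tensor, and `s` is the down-sampling rate (all names are illustrative):

```python
def correct_corners(xs, ys, offset, s=4):
    dx = offset[0, ys, xs]       # S41: retrieve the per-corner offsets
    dy = offset[1, ys, xs]
    x = xs.float() * s + dx      # S42: map back to the input image and
    y = ys.float() * s + dy      #      repair the down-sampling precision loss
    return x, y
```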
S5, constructing a candidate detection frame through the upper left corner point coordinate set and the lower right corner point coordinate set, and calculating a centripetal area of the candidate detection frame;
the method for constructing the candidate detection frame comprises the following steps:
S51, obtaining the corrected upper left corner point coordinate set tl and lower right corner point coordinate set br, searching the centripetal vector heat map with the x and y coordinates of each corner point in tl and br as indexes to obtain the centripetal vector of the current corner point, and adding each upper left corner point coordinate in tl to its corresponding centripetal vector, thereby obtaining the target center coordinate (tl_ctx, tl_cty) corresponding to each upper left corner point; the target center coordinate (br_ctx, br_cty) of each lower right corner point is calculated in the same way;
S52, combining the coordinates of each upper left corner point in the upper left corner point coordinate set tl with the coordinates of all lower right corner points in the lower right corner point coordinate set br one by one, exhaustively pairing them to obtain a pairing matrix of candidate detection frames, wherein the candidate frame corresponding to the pairing of the i-th upper left corner point in tl and the j-th lower right corner point in br is bbox_ij = (tl_x^i, tl_y^i, br_x^j, br_y^j), and the target centers (tl_ctx^i, tl_cty^i) and (br_ctx^j, br_cty^j) corresponding to each bbox_ij are saved for pairing screening; the target confidence of the target center is calculated as:

score_ij = (tl_score_i + br_score_j)/2,

where tl_score_i is the confidence score of the i-th upper left corner point and br_score_j is the confidence score of the j-th lower right corner point.
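The exhaustive pairing of S51-S52 can be vectorized by broadcasting. In this hedged sketch, `tl_xy` and `br_xy` are (k, 2) corrected corner coordinates, `tl_vec` and `br_vec` are (k, 2) decoded centripetal vectors, and `tl_score`, `br_score` are (k,) confidence scores (all names illustrative):

```python
import torch

tl_ct = tl_xy + tl_vec                 # S51: target center per upper left corner
br_ct = br_xy + br_vec                 # S51: target center per lower right corner
k = tl_xy.size(0)
# S52: bboxes[i, j] = (tl_x^i, tl_y^i, br_x^j, br_y^j)
bboxes = torch.cat([tl_xy[:, None, :].expand(k, k, 2),
                    br_xy[None, :, :].expand(k, k, 2)], dim=2)
scores = (tl_score[:, None] + br_score[None, :]) / 2   # score_ij
```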
The method for calculating the centripetal region is as follows: for all candidate detection frames, a centripetal region R_central^ij = (ctl_x^ij, ctl_y^ij, cbr_x^ij, cbr_y^ij) is defined, where ctl_x^ij, ctl_y^ij, cbr_x^ij and cbr_y^ij are calculated by the following formulas:

ctl_x^ij = ((1+μ)·tl_x^i + (1-μ)·br_x^j)/2
ctl_y^ij = ((1+μ)·tl_y^i + (1-μ)·br_y^j)/2
cbr_x^ij = ((1-μ)·tl_x^i + (1+μ)·br_x^j)/2
cbr_y^ij = ((1-μ)·tl_y^i + (1+μ)·br_y^j)/2

where μ is a hyperparameter, ctl_x^ij is the abscissa of the upper left corner point of the centripetal region, ctl_y^ij is the ordinate of the upper left corner point of the centripetal region, cbr_x^ij is the abscissa of the lower right corner point of the centripetal region, and cbr_y^ij is the ordinate of the lower right corner point of the centripetal region.
For detection frames with an area larger than 3500, μ is taken as 1/2.1; otherwise, μ is taken as 1/2.4.
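A sketch of the centripetal region computation under the formulas above, with μ chosen by box area as just stated:

```python
import torch

def centripetal_region(tl_x, tl_y, br_x, br_y):
    area = (br_x - tl_x) * (br_y - tl_y)
    mu = torch.where(area > 3500,
                     torch.full_like(area, 1 / 2.1),
                     torch.full_like(area, 1 / 2.4))
    ctl_x = ((1 + mu) * tl_x + (1 - mu) * br_x) / 2
    ctl_y = ((1 + mu) * tl_y + (1 - mu) * br_y) / 2
    cbr_x = ((1 - mu) * tl_x + (1 + mu) * br_x) / 2
    cbr_y = ((1 - mu) * tl_y + (1 + mu) * br_y) / 2
    return ctl_x, ctl_y, cbr_x, cbr_y

# e.g. per-pair (k, k) regions from the pairing sketch above:
# centripetal_region(tl_xy[:, None, 0], tl_xy[:, None, 1],
#                    br_xy[None, :, 0], br_xy[None, :, 1])
```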
S6, post-processing the candidate detection frames to obtain a final detection frame, and judging the wearing condition of the safety helmet according to the types of the detection frames;
the post-processing comprises screening and filtering the candidate detection frames, wherein the screening and filtering is to remove all impossible detection frames according to screening conditions, and the screening conditions comprise:
if the categories of the two corner points are inconsistent, the detection frame is removed, with the judgment formula:

tl_clses_i ≠ br_clses_j,

where tl_clses_i is the category of the upper left corner point of the detection frame and br_clses_j is the category of the lower right corner point of the detection frame;

if the upper left corner point is not located to the upper left of the lower right corner point, the detection frame is removed, with the judgment formula:

tl_x^i > br_x^j | tl_y^i > br_y^j,

where (tl_x^i, tl_y^i) are the coordinates of the upper left corner point of the detection frame and (br_x^j, br_y^j) are the coordinates of the lower right corner point of the detection frame;
if the predicted target center position is not in the centripetal region, the detection frame is removed; that is, the detection frame is kept only when all of the following hold:

ctl_x^ij < tl_ctx^i < cbr_x^ij
ctl_y^ij < tl_cty^i < cbr_y^ij
ctl_x^ij < br_ctx^j < cbr_x^ij
ctl_y^ij < br_cty^j < cbr_y^ij

where tl_ctx^i, tl_cty^i, br_ctx^j and br_cty^j are all target center coordinates, and (ctl_x^ij, ctl_y^ij, cbr_x^ij, cbr_y^ij) is obtained from the centripetal region.
And after the candidate detection frames are screened and filtered, removing the detection frames with low confidence level in the overlapped detection frames by adopting a Soft-NMS algorithm, and setting a confidence threshold of the Soft-NMS algorithm according to actual needs.
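Continuing the pairing sketch, the three screening conditions can be applied as boolean masks over the (k, k) pairing matrices; here `tl_cls`, `br_cls` are (k,) corner categories and `ctl_x`, `ctl_y`, `cbr_x`, `cbr_y` are the (k, k) centripetal-region bounds from the earlier sketch (all names illustrative):

```python
import torch

keep = tl_cls[:, None] == br_cls[None, :]                  # same category
keep &= (tl_xy[:, None, 0] < br_xy[None, :, 0]) \
      & (tl_xy[:, None, 1] < br_xy[None, :, 1])            # tl truly upper-left
keep &= (ctl_x < tl_ct[:, None, 0]) & (tl_ct[:, None, 0] < cbr_x) \
      & (ctl_y < tl_ct[:, None, 1]) & (tl_ct[:, None, 1] < cbr_y)
keep &= (ctl_x < br_ct[None, :, 0]) & (br_ct[None, :, 0] < cbr_x) \
      & (ctl_y < br_ct[None, :, 1]) & (br_ct[None, :, 1] < cbr_y)
scores = torch.where(keep, scores, torch.zeros_like(scores))
# surviving (i, j) pairs then go through Soft-NMS to suppress overlaps
```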
The safety helmet wearing detection system is constructed based on a central attention centripetal network, and comprises: the device comprises a normalization module, a feature extraction network, an angular point prediction module, an angular point screening module, an angular point position correction module, a detection frame construction module and a post-processing module;
the normalization module normalizes the size of the image to be detected; the feature extraction network extracts features of an image to be detected to obtain a feature map, and the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction on the feature map to obtain an offset heat map, a corner position heat map and a centripetal vector heat map; the corner screening module performs local maximum screening and TopK screening processing on the corner position heat map, filters redundant corner detection results, obtains an upper left corner point coordinate set and a lower right corner point coordinate set, and a category and a confidence of each corner in each coordinate set, wherein the corner categories comprise: the head wears the safety helmet and does not wear the safety helmet; the corner point position correction module corrects the corner point position by using the offset heat map to obtain a corrected coordinate set of a left upper corner point and a corrected coordinate set of a right lower corner point; the detection frame construction module constructs a candidate detection frame through the corrected coordinate set of the upper left corner point and the coordinate set of the lower right corner point, and calculates a centripetal area of the candidate detection frame; and the post-processing module performs post-processing on the detection frame to obtain a final detection frame, and judges the wearing condition of the safety helmet according to the type of the detection frame.
Example 3
The embodiment discloses a helmet wearing detection system based on a central attention centripetal network, as shown in fig. 1, the helmet wearing detection system includes: the system comprises a feature extraction network, an angular point prediction module and a post-processing module;
a feature extraction network acquires an image to be detected and performs feature extraction to obtain a feature map; the corner point prediction module predicts coordinates of an upper left corner point and a lower right corner point, performs corner point pairing, and forms a rectangular detection frame by the paired corner points; the post-processing module removes all impossible detection frames according to the screening condition, then removes the detection frames with low confidence level in the overlapped detection frames through a Soft-NMS algorithm to obtain a detection result, and sets a confidence threshold value of the Soft-NMS algorithm according to actual needs; and judging the wearing condition of the safety helmet by using the detection frame whether the head is not worn or not in the detection result.
In the detection implementation process, frame extraction processing is performed on an input video, and the size of an image frame is adjusted to a uniform resolution, so as to obtain an image to be detected, where the uniform resolution is 512 × 512 in this embodiment.
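For illustration, a minimal OpenCV preprocessing loop under these assumptions (the file path and sampling interval are hypothetical):

```python
import cv2

cap = cv2.VideoCapture("site_camera.mp4")    # hypothetical video source
frames, idx = [], 0
ok, frame = cap.read()
while ok:
    if idx % 25 == 0:                        # e.g. one frame per second at 25 fps
        frames.append(cv2.resize(frame, (512, 512)))   # uniform resolution
    idx += 1
    ok, frame = cap.read()
cap.release()
```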
In the feature extraction stage, features of the image to be detected are extracted and integrated through a lightweight feature extraction network to obtain an image feature map. In this embodiment, DLANet is used as the feature extraction network to ensure that the network module is lightweight and detects in real time.
And in the corner point prediction stage, respectively predicting an upper left corner point, a lower right corner point and a centripetal vector for corner point pairing of the target through an upper left corner point prediction module and a lower right corner point prediction module.
In addition, as one of the significant innovations of the invention, in order to force the feature extraction network to obtain richer image features, the proposed boundary constraint central attention module is added in the corner prediction stage only during network training, so that the network additionally predicts a center point position and a boundary constraint vector for each target to be detected. During network training, the data set is divided into a training set, a validation set and a test set: the training set is used to optimize the network parameters, the validation set is used to select the optimal result from the parameter sets obtained in multiple training runs, and the test set is used to test the generalization ability of the network. After training, the boundary constraint central attention module is removed, so it adds no time cost or model size during detection.
In the post-processing stage of the network, based on the predicted corner position and centripetal vector, the corners belonging to the same target are paired, and repeated detection results are eliminated through a Soft-NMS algorithm so as to confirm and output the position information and the category information of the detection target. When the network outputs a detection result including that the safety helmet is not worn, an alarm signal is sent out.
A schematic diagram of a network structure of a corner prediction module is shown in fig. 2, and a helmet wearing detection system based on a central attention centripetal network detects a helmet target as a pair of key points, namely, an upper left corner and a lower right corner of a bounding box. Since the corner of a target is often located outside the target, and usually no obvious local visual features can be used to determine whether a pixel is a target corner, corner pooling is crucial to corner prediction, and a conventional corner pooling layer enables each pixel to perform maximum pooling along the horizontal direction, finds the upper boundary or the lower boundary of the target, and simultaneously finds the left boundary or the right boundary of the target along the vertical direction. However, this operation makes the corners more sensitive to edge features of the object, and the internal information of the object does not focus well on the corner locations. In order to solve the problem and not increase additional time cost, in the embodiment, for the task of detecting the safety helmet, a vertical-horizontal corner pooling layer is added to a corner prediction module.
For the helmet detection task, the vertical-horizontal corner pooling layer focuses on bringing the horizontal edge features and internal features of the target to the corner positions that need to be predicted, since the upper or lower edge of the target contains more critical information (e.g., whether a helmet is worn, whether a human torso is present), while the sides contain more irrelevant background information. The vertical-horizontal corner pooling layer uses a convolution block consisting of a 3x3 convolution layer, a BN layer and a ReLU activation function layer to extract features of the input feature map at the target interior, the target's horizontal edges and the corner positions respectively. A vertical pooling operation focuses the maximum of the target's internal features onto the horizontal edge positions, adding it to the feature values there; a horizontal pooling operation then focuses the new feature maximum onto the corner positions, adding it to the local corner features. Thus, at each corner position to be predicted, the network can simultaneously perceive the local features of that position, the horizontal edge features of the target and the internal features of the target, which facilitates more accurate prediction. Compared with a common corner pooling layer, the proposed vertical-horizontal corner pooling layer improves detection precision without additional time cost.
For the upper left corner point prediction module, the vertical pooling (top pooling) and horizontal pooling (left pooling) operations for a pixel (i, j) can be expressed respectively as:

Top pooling: t_ij = max(f_ij, t_(i+1)j)
Left pooling: t_ij = max(f_ij, t_i(j+1))

Similarly, for the lower right corner point prediction module, the vertical pooling (bottom pooling) and horizontal pooling (right pooling) operations for a pixel (i, j) can be expressed respectively as:

Bottom pooling: t_ij = max(f_ij, t_(i-1)j)
Right pooling: t_ij = max(f_ij, t_i(j-1))
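These recurrences are running maxima, so each direction can be sketched as a cumulative maximum (flipped where the scan runs backwards); a vectorized PyTorch illustration over (B, C, H, W) feature maps:

```python
import torch

def top_pool(f):     # t_ij = max(f_ij, t_(i+1)j): scan bottom-to-top
    return f.flip(2).cummax(dim=2)[0].flip(2)

def left_pool(f):    # t_ij = max(f_ij, t_i(j+1)): scan right-to-left
    return f.flip(3).cummax(dim=3)[0].flip(3)

def bottom_pool(f):  # t_ij = max(f_ij, t_(i-1)j): scan top-to-bottom
    return f.cummax(dim=2)[0]

def right_pool(f):   # t_ij = max(f_ij, t_i(j-1)): scan left-to-right
    return f.cummax(dim=3)[0]
```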
After the vertical-horizontal corner pooling layer, a convolution block is used to predict a group of corner positions in the form of a heat map, representing the coordinates and corresponding confidences of the corners of the different object categories; the convolution block comprises a 3x3 convolution layer, a BN (batch normalization) layer and a ReLU activation layer, followed by a 1x1 convolution layer, a BN layer and a ReLU activation layer. To obtain a more accurate bounding box and repair the precision loss caused by down-sampling in the feature extraction network, the corner prediction module also predicts an offset (Δx, Δy) to fine-tune the corner position. At the same time, in order to match the upper left and lower right corner points of a target, the module predicts for each detected corner a centripetal vector pointing to the center of the target.
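A sketch of such a prediction head in PyTorch; the intermediate channel width and the per-branch output channels are assumptions:

```python
import torch.nn as nn

def make_head(in_ch, out_ch, mid_ch=256):    # mid_ch is an assumed width
    return nn.Sequential(
        nn.Conv2d(in_ch, mid_ch, 3, padding=1, bias=False),
        nn.BatchNorm2d(mid_ch),
        nn.ReLU(inplace=True),
        nn.Conv2d(mid_ch, out_ch, 1, bias=False),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

# e.g. corner heat map head with 5 categories (four helmet colours plus no
# helmet), offset head with 2 channels, centripetal vector head with 2.
```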
For a certain target to be detected, let the coordinates of its upper left corner point be (tl_x, tl_y) and the coordinates of its lower right corner point be (br_x, br_y); the coordinates of its center point can then be calculated as:

ct_x = (tl_x + br_x)/2
ct_y = (tl_y + br_y)/2
The centripetal vector to be predicted is defined by the following formulas, where s is the down-sampling rate of the feature extraction network (s = 4 in this embodiment):

cs_tl = (log((ct_x - tl_x)/s), log((ct_y - tl_y)/s))
cs_br = (log((br_x - ct_x)/s), log((br_y - ct_y)/s))
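Under this log-scaled definition (a reconstruction; the log encoding is assumed from the definition above), the training targets for one ground-truth box could be encoded as, for example:

```python
import math

def centripetal_targets(tl, br, s=4.0):
    # assumes a non-degenerate box (width and height > 0)
    ct = ((tl[0] + br[0]) / 2, (tl[1] + br[1]) / 2)        # box center
    cs_tl = (math.log((ct[0] - tl[0]) / s),
             math.log((ct[1] - tl[1]) / s))                # upper left target
    cs_br = (math.log((br[0] - ct[0]) / s),
             math.log((br[1] - ct[1]) / s))                # lower right target
    return cs_tl, cs_br
```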
In addition, owing to the corner pooling operation, the feature map forms a salient region shaped like a cross star centered on each corner position. To capture the rich context information of this salient region, this embodiment adds a cross-shaped star deformable convolution, a special deformable convolution structure. As shown in FIG. 4, for the input feature map, a two-dimensional guiding vector δ_tl (or δ_br, corresponding to the upper left and lower right corner points respectively) is first predicted for each pixel by convolution, and the prediction of the guiding vector is supervised in the training stage, where δ is defined as:

δ_tl = ((ct_x - tl_x)/s, (ct_y - tl_y)/s)
δ_br = ((br_x - ct_x)/s, (br_y - ct_y)/s)
After the guiding vectors are obtained, a convolution layer is used to generate an offset field for adjusting the shape of the convolution kernel. In this way, the cross-shaped star deformable convolution can capture visual features focused on the target edges in a targeted manner. The central attention centripetal network uses the feature map output by the cross-shaped star deformable convolution to predict the centripetal vectors; at the same time, this feature map is spliced with the original feature map, introducing richer visual features to jointly predict the positions and categories of the corner points.
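A hedged sketch of this structure using torchvision's deformable convolution: a 1x1 convolution predicts the guiding vector, another convolution expands it into the offset field of a 3x3 deformable kernel (all layer sizes are assumptions):

```python
import torch.nn as nn
from torchvision.ops import DeformConv2d

class CrossStarDeformConv(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.guide = nn.Conv2d(ch, 2, 1)           # guiding vector, supervised
        self.offset = nn.Conv2d(2, 2 * 3 * 3, 1)   # offset field for 3x3 kernel
        self.dcn = DeformConv2d(ch, ch, 3, padding=1)

    def forward(self, x):
        guide = self.guide(x)                      # (B, 2, H, W)
        out = self.dcn(x, self.offset(guide))      # deformed feature map
        return out, guide                          # guide is supervised in training
```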
In the embodiment, the vertical-horizontal corner pooling layer is arranged in the corner prediction module, so that the internal features of the target are gathered at the corner positions to improve the detection capability at a low detection time cost. Meanwhile, richer visual features can be captured by using the cross-shaped star deformable convolution and spliced with the original feature map, and richer deep features are introduced to make more accurate prediction on the positions and the categories of the angular points.
In order to extract the internal features of the target more effectively and reduce the dependence of the helmet wearing detection system on the edge information, the embodiment proposes a boundary constraint center attention module only used in the training phase of the helmet wearing detection system, so that the network additionally predicts the center point of the target and a group of boundary constraint vectors in the training phase to force the feature extraction network to pay attention to the internal features of the target, and the boundary constraint center attention module is a costless module capable of improving the network detection accuracy.
The schematic diagram of the boundary constraint center attention module structure proposed in this embodiment is shown in fig. 3, and since the most important internal features of an object are not necessarily exactly centered on its geometric center, this embodiment adds a center pooling layer to the module. In order to determine whether a pixel is the center point, it needs to find the maximum value in its horizontal and vertical directions and add them, and for each pixel on the input feature map, the module first uses a convolution block consisting of 3 × 3 convolution layer, BN layer and ReLU activation function layer to extract and retain the local feature of the location. A left pooling and right pooling module is connected in series to search for its characteristic maximum in the horizontal direction, while an upper pooling and lower pooling module is connected in series to search for its characteristic maximum in the vertical direction and add the two. Through the central pooling layer, the boundary constraint central attention module can acquire the internal feature information with the most discrimination, and the feature extraction network is forced to learn the capability of extracting the critical internal information.
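Reusing the directional pooling helpers sketched earlier, the central pooling layer can be expressed as the sum of a full-row maximum and a full-column maximum at each position (the per-branch convolution blocks are omitted):

```python
def center_pool(f):
    horizontal = right_pool(left_pool(f))   # left+right in series: row maximum
    vertical = bottom_pool(top_pool(f))     # top+bottom in series: column maximum
    return horizontal + vertical
```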
For an object to be detected, its center point coordinates (ct_x, ct_y) and a corresponding confidence score are predicted in the form of a heat map through a set of convolution blocks following the central pooling layer, and a set of offsets (Δct_x, Δct_y) is predicted to obtain a more accurate position.
Meanwhile, for the corner prediction module, the feature extraction network is sensitive to the size of the target, and a more accurately predicted centripetal vector improves the accuracy of corner pairing. Therefore, a boundary constraint vector prediction branch is added to the boundary constraint central attention module to predict a group of boundary constraint vectors representing the boundary size constraint of the target; this branch forces the feature extraction network to capture the scale information of the target, and the predicted boundary constraint (l_h, l_w) can be expressed as:

(l_h, l_w) = (log((br_y - tl_y)/s), log((br_x - tl_x)/s)).
the training process of the helmet wearing detection system of the embodiment is as follows:
Step 1, image collection: in this embodiment, a web crawler is used to retrieve and download construction scene images (in other embodiments, the required images can be obtained directly from surveillance image data on a construction site). The images contain persons wearing or not wearing safety helmets, and images that do not meet the training requirements are screened out, including single-background images, non-construction scenes, advertising pictures, etc. To improve the generalization of the trained model, difficult scenes such as small-scale persons, night construction sites, occluded persons or dense crowds should be added appropriately.
Step 2, image annotation: the position and category of each person's safety helmet are marked for each image. The marked area is the head area of the person, i.e. the area containing the whole safety helmet, or the bare head if no helmet is worn. The marked information is the coordinates of the upper left and lower right corners of the area, expressed as (x_min, y_min) and (x_max, y_max) respectively, together with whether the area contains a safety helmet and the helmet color, i.e. 5 categories in total (blue, red, yellow, white, no safety helmet). The annotation information is stored as an xml file.
Step 3, data set division: the data are divided in the ratio 5:2.5:2.5 (training set : validation set : test set), where the training set is used for back-propagation optimization of the network parameters, the validation set is used to select the optimal parameter set from the results of multiple training runs, and the test set is used for the final test of the model effect. The divided data set information is stored as 3 JSON files (in MS COCO data set format).
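An illustrative split under these ratios, assuming `images`, `annotations` and `categories` are pre-built COCO-style lists:

```python
import json
import random

random.shuffle(images)
n = len(images)
splits = {"train": images[: n // 2],              # 5 : 2.5 : 2.5
          "val": images[n // 2 : (3 * n) // 4],
          "test": images[(3 * n) // 4 :]}
for name, imgs in splits.items():
    ids = {im["id"] for im in imgs}
    anns = [a for a in annotations if a["image_id"] in ids]
    with open(f"helmet_{name}.json", "w") as fp:  # MS COCO format JSON
        json.dump({"images": imgs, "annotations": anns,
                   "categories": categories}, fp)
```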
Step 4, network training: a complete central attention centripetal network model is constructed and its parameters are initialized; the modules used include the feature extraction network, the corner prediction module and the boundary constraint central attention module. The network is trained with supervision on the constructed training set, and the network parameters are optimized by back propagation until the network converges (in the training phase, corner pairing and NMS post-processing of the network outputs are not required). This step is repeated several times to obtain multiple parameter sets, from which the optimal result is picked on the validation set.
Step 5, network testing: for each trained network, the boundary constraint central attention module is first removed; the average precision is then measured on the validation set, the parameter set with the highest average precision is selected as the final training result, and its generalization is tested on the test set.

Claims (10)

1. A safety helmet wearing detection method based on a central attention centripetal network is characterized by comprising the following steps:
s1, normalizing the size of the image to be detected;
s2, extracting features of the image to be detected to obtain a feature map, and performing offset prediction, corner position prediction and centripetal vector prediction on the feature map by using a corner prediction module to obtain an offset heat map, a corner position heat map and a centripetal vector heat map;
s3, carrying out local maximum value screening and TopK screening processing on the corner position heat map, filtering out redundant corner detection results, obtaining an upper left corner point coordinate set and a lower right corner point coordinate set, and the category and the confidence coefficient of each corner in each coordinate set, wherein the corner categories comprise: the head wears the safety helmet and does not wear the safety helmet;
s4, correcting the corner positions by using the offset heat map to obtain a corrected coordinate set of a top left corner point and a corrected coordinate set of a bottom right corner point;
s5, constructing a candidate detection frame through the corrected coordinate set of the upper left corner point and the coordinate set of the lower right corner point, and calculating a centripetal area of the candidate detection frame;
and S6, carrying out post-processing on the candidate detection frames to obtain a final detection frame, and judging the wearing condition of the safety helmet according to the types of the detection frames.
2. The method for detecting wearing of a helmet based on a central attention centripetal network according to claim 1, wherein the corner point categories of the helmet worn on the head comprise: the head wears a red safety helmet, the head wears a blue safety helmet, the head wears a white safety helmet and the head wears a yellow safety helmet.
3. The method for detecting the wearing of a helmet based on the central attention centripetal network as claimed in claim 1, wherein said feature extraction of step S2 is implemented by a feature extraction network, the feature extraction network has a down-sampling rate, and said correcting the angular position of step S4 comprises the steps of:
s41, retrieving an offset heat map by taking the x and y coordinates of each corner point in the upper left corner point coordinate set tl and the lower right corner point coordinate set br as indexes to acquire the offset of each corner point;
and S42, multiplying each corner coordinate by the down-sampling rate of the feature extraction network to map the corner position in the offset heat map back to the corresponding position of the input image, adding the coordinates and the offset to correct the precision loss in the down-sampling process, and obtaining a corrected upper left corner point coordinate set tl and a corrected lower right corner point coordinate set br.
4. The method for detecting wearing of a helmet based on a central attention centripetal network according to claim 3, wherein the method for constructing the candidate detection boxes in step S5 comprises:
S51, obtaining the corrected upper left corner point coordinate set tl and lower right corner point coordinate set br, searching the centripetal vector heat map with the x and y coordinates of each corner point in tl and br as indexes to obtain the centripetal vector of the current corner point, and adding each upper left corner point coordinate in tl to its corresponding centripetal vector, thereby obtaining the target center coordinate (tl_ctx, tl_cty) corresponding to each upper left corner point; the target center coordinate (br_ctx, br_cty) of each lower right corner point is calculated in the same way;
S52, combining the coordinates of each upper left corner point in the upper left corner point coordinate set tl with the coordinates of all lower right corner points in the lower right corner point coordinate set br one by one, exhaustively pairing them to obtain a pairing matrix of candidate detection frames, wherein the candidate frame corresponding to the pairing of the i-th upper left corner point in tl and the j-th lower right corner point in br is bbox_ij = (tl_x^i, tl_y^i, br_x^j, br_y^j), and the target centers (tl_ctx^i, tl_cty^i) and (br_ctx^j, br_cty^j) corresponding to each bbox_ij are saved for pairing screening; the target confidence of the target center is calculated as:

score_ij = (tl_score_i + br_score_j)/2,

where tl_score_i is the confidence score of the i-th upper left corner point and br_score_j is the confidence score of the j-th lower right corner point.
5. The method for detecting wearing of a safety helmet based on a central attention centripetal network according to claim 4, wherein the centripetal region in step S5 is calculated as follows: for all candidate detection frames, a centripetal region R_central^ij = (ctl_x^ij, ctl_y^ij, cbr_x^ij, cbr_y^ij) is defined, where ctl_x^ij, ctl_y^ij, cbr_x^ij and cbr_y^ij are calculated by the following formulas:

ctl_x^ij = ((1+μ)·tl_x^i + (1-μ)·br_x^j)/2
ctl_y^ij = ((1+μ)·tl_y^i + (1-μ)·br_y^j)/2
cbr_x^ij = ((1-μ)·tl_x^i + (1+μ)·br_x^j)/2
cbr_y^ij = ((1-μ)·tl_y^i + (1+μ)·br_y^j)/2

where μ is a hyperparameter, ctl_x^ij is the abscissa of the upper left corner point of the centripetal region, ctl_y^ij is the ordinate of the upper left corner point of the centripetal region, cbr_x^ij is the abscissa of the lower right corner point of the centripetal region, and cbr_y^ij is the ordinate of the lower right corner point of the centripetal region.
6. The method for detecting the wearing of the safety helmet based on the central attention centripetal network according to claim 5, wherein the post-processing of step S6 comprises performing a filtering process on the candidate detection frames, the filtering process is a process of removing all impossible detection frames according to a filtering condition, the filtering condition comprises:
if the categories of the two corner points are inconsistent, the detection frame is removed, the judgment formula being:

$$tl\_clses_i \neq br\_clses_j,$$

where $tl\_clses_i$ is the category of the upper left corner point of the detection frame and $br\_clses_j$ is the category of the lower right corner point of the detection frame;
if the upper left corner point is not located above and to the left of the lower right corner point, the detection frame is removed, the judgment formula being:

$$tl_x^i > br_x^j \;\lor\; tl_y^i > br_y^j,$$

where $(tl_x^i, tl_y^i)$ are the coordinates of the upper left corner point of the detection frame and $(br_x^j, br_y^j)$ are the coordinates of the lower right corner point of the detection frame;
if the predicted target center position is not in the centripetal region, the detection frame is removed, the judgment formulas being:
$$tl_{ctx}^i < ctl_x^{ij} \;\lor\; tl_{ctx}^i > cbr_x^{ij}$$

$$tl_{cty}^i < ctl_y^{ij} \;\lor\; tl_{cty}^i > cbr_y^{ij}$$

$$br_{ctx}^j < ctl_x^{ij} \;\lor\; br_{ctx}^j > cbr_x^{ij}$$

$$br_{cty}^j < ctl_y^{ij} \;\lor\; br_{cty}^j > cbr_y^{ij}$$

where $(tl_{ctx}^i, tl_{cty}^i)$ and $(br_{ctx}^j, br_{cty}^j)$ are the target center coordinates, and $(ctl_x^{ij}, ctl_y^{ij}, cbr_x^{ij}, cbr_y^{ij})$ is obtained from the centripetal region.
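A minimal sketch combining the three filtering conditions for one candidate box (names are illustrative; the centripetal-region bounds follow the reconstruction assumed in claim 5):

```python
def keep_box(tl_cls, br_cls, tlx, tly, brx, bry, tl_center, br_center, region):
    """Return True if a candidate box survives all three filters of step S6.

    tl_center / br_center: predicted target centers (x, y) of the two corners
    region: (ctlx, ctly, cbrx, cbry) bounds of the centripetal region
    """
    # condition 1: both corners must predict the same category
    if tl_cls != br_cls:
        return False
    # condition 2: the top-left corner must lie above and left of the bottom-right
    if tlx > brx or tly > bry:
        return False
    # condition 3: both predicted centers must fall inside the centripetal region
    ctlx, ctly, cbrx, cbry = region
    for cx, cy in (tl_center, br_center):
        if not (ctlx <= cx <= cbrx and ctly <= cy <= cbry):
            return False
    return True
```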
7. The method according to claim 6, wherein after the candidate detection frames have been filtered, a Soft-NMS algorithm is used to remove, from among the overlapping detection frames, those whose confidence does not meet a preset condition.
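A sketch of Gaussian Soft-NMS as commonly formulated (the patent names the algorithm but does not fix its variant or parameters, so sigma and the score threshold below are assumptions):

```python
import numpy as np

def iou(box, boxes):
    """IoU of one box (x1, y1, x2, y2) against an (N, 4) array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (box[2] - box[0]) * (box[3] - box[1])
    areas = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area + areas - inter + 1e-9)

def soft_nms(boxes, scores, sigma=0.5, score_thresh=0.001):
    """Instead of discarding overlapping boxes outright, decay their scores
    by overlap with the current best box, then drop low-score boxes."""
    boxes, scores = boxes.copy(), scores.copy()
    keep = []
    while scores.size and scores.max() > score_thresh:
        i = scores.argmax()
        keep.append(boxes[i].copy())
        decay = np.exp(-(iou(boxes[i], boxes) ** 2) / sigma)  # Gaussian penalty
        scores = scores * decay
        scores[i] = 0  # never select the same box twice
    return np.array(keep)
```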
8. A safety helmet wearing detection system for performing the safety helmet wearing detection method based on the central attention centripetal network according to any one of claims 1-7, the system comprising: a normalization module, a feature extraction network, a corner prediction module, a corner screening module, a corner position correction module, a detection frame construction module and a post-processing module;
the normalization module normalizes the size of the image to be detected; the feature extraction network extracts features of the image to be detected to obtain a feature map, and the corner prediction module performs offset prediction, corner position prediction and centripetal vector prediction on the feature map to obtain an offset heat map, a corner position heat map and a centripetal vector heat map; the corner screening module performs local maximum screening and TopK screening on the corner position heat map to filter out redundant corner detections, obtaining an upper left corner point coordinate set and a lower right corner point coordinate set together with the category and confidence of each corner point in each set, the corner categories being: head wearing a safety helmet and head not wearing a safety helmet; the corner point position correction module corrects the corner point positions using the offset heat map to obtain corrected upper left and lower right corner point coordinate sets; the detection frame construction module constructs candidate detection frames from the corrected coordinate sets and calculates the centripetal region of each candidate detection frame; and the post-processing module post-processes the detection frames to obtain the final detection frames and judges the wearing condition of the safety helmet according to the category of each detection frame.
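The corner screening step (local maximum screening followed by TopK) can be sketched as follows in PyTorch; the 3x3 neighborhood and K=100 are assumptions, as the claim does not fix them:

```python
import torch
import torch.nn.functional as F

def screen_corners(heatmap, k=100):
    """Keep only 3x3 local maxima of the corner position heat map,
    then take the TopK responses across all categories.

    heatmap: (C, H, W) per-category corner heat map
    returns: corner x, y positions, categories and confidence scores
    """
    pooled = F.max_pool2d(heatmap.unsqueeze(0), 3, stride=1, padding=1).squeeze(0)
    local_max = heatmap * (heatmap == pooled)      # suppress non-maxima
    scores, idx = local_max.flatten().topk(k)      # TopK over C*H*W responses
    c, h, w = heatmap.shape
    cls = torch.div(idx, h * w, rounding_mode="floor")
    ys = torch.div(idx % (h * w), w, rounding_mode="floor")
    xs = idx % w
    return xs, ys, cls, scores
```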
9. The safety helmet wearing detection system of claim 8, wherein during training, the system further comprises a boundary-constrained central attention module comprising: a central pooling layer, an offset prediction module, a central point position prediction module and a boundary constraint vector prediction module;
the central pooling layer acquires the feature map output by the feature extraction network; the boundary-constrained central attention module acquires the most discriminative internal feature information through the central pooling layer, forcing the feature extraction network to learn to extract critical internal information; the features of the target to be detected are transmitted through the central pooling layer to the offset prediction module, the central point position prediction module and the boundary constraint vector prediction module respectively; the central point position prediction module predicts the central point coordinates and the corresponding confidence scores, the offset prediction module predicts offsets to make the localization more accurate, and the boundary constraint vector prediction module predicts a set of boundary constraint vectors representing the boundary size constraint of the target, forcing the feature extraction network to capture the scale information of the target, the boundary constraint $(l_h, l_w)$ being expressed as:
$$l_h = \frac{br_y - tl_y}{s}, \qquad l_w = \frac{br_x - tl_x}{s},$$

where $s$ is the down-sampling rate, $(tl_x, tl_y)$ are the coordinates of the upper left corner point, and $(br_x, br_y)$ are the coordinates of the lower right corner point.
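Under the reconstruction assumed above, the boundary constraint is simply the box size at feature-map scale:

```python
def boundary_constraint(tlx, tly, brx, bry, s):
    """Boundary-size constraint (l_h, l_w): box height and width
    divided by the down-sampling rate s."""
    return (bry - tly) / s, (brx - tlx) / s
```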
10. The safety helmet wearing detection system of claim 8, wherein the corner prediction module comprises: a vertical-horizontal corner pooling layer, an offset prediction module, a corner position prediction module and a centripetal vector prediction module;
the vertical-horizontal corner pooling layer obtains the feature map output by the feature extraction network and uses convolution blocks, each consisting of a 3x3 convolution layer, a BN layer and a ReLU activation function layer, to extract features of the input feature map at the target interior, the target's horizontal edge and the corner position respectively; vertical pooling focuses the maximum of the target's internal features onto the horizontal edge position and adds it to the feature value at that position, and horizontal pooling then focuses the new feature maximum onto the corner position and adds it to the corner's local features; the features output by the vertical-horizontal corner pooling layer pass through the offset prediction module, the corner position prediction module and the centripetal vector prediction module to obtain the offset heat map, the corner position heat map and the centripetal vector heat map respectively.
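A PyTorch sketch of the vertical-horizontal pooling for the top-left corner, as one reading of this claim (the three input maps come from the three 3x3 conv blocks; branch names and the bottom-to-top/right-to-left scan directions are assumptions):

```python
import torch

def vertical_horizontal_pool_tl(f_inner, f_edge, f_corner):
    """Vertical-horizontal corner pooling for the top-left corner.

    f_inner, f_edge, f_corner: (N, C, H, W) features extracted for the
    target interior, the target's horizontal edge, and the corner position.
    """
    # vertical pooling: propagate each column's running maximum upward so
    # the interior maximum lands on the horizontal (top) edge, then add it
    # to the edge branch's feature values
    v = torch.flip(torch.cummax(torch.flip(f_inner, dims=[2]), dim=2).values, dims=[2])
    edge = f_edge + v
    # horizontal pooling: propagate each row's running maximum leftward so
    # the new maximum lands on the corner position, then add it to the
    # corner branch's local features
    h = torch.flip(torch.cummax(torch.flip(edge, dims=[3]), dim=3).values, dims=[3])
    return f_corner + h
```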
CN202111397722.1A 2021-11-23 2021-11-23 Safety helmet wearing detection method and system based on central attention centripetal network Pending CN114067365A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111397722.1A CN114067365A (en) 2021-11-23 2021-11-23 Safety helmet wearing detection method and system based on central attention centripetal network

Publications (1)

Publication Number Publication Date
CN114067365A true CN114067365A (en) 2022-02-18

Family

ID=80275556

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111397722.1A Pending CN114067365A (en) 2021-11-23 2021-11-23 Safety helmet wearing detection method and system based on central attention centripetal network

Country Status (1)

Country Link
CN (1) CN114067365A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117726991A (en) * 2024-02-07 2024-03-19 金钱猫科技股份有限公司 High-altitude hanging basket safety belt detection method and terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination