CN111401278A - Helmet identification method and device, electronic equipment and storage medium - Google Patents


Info

Publication number
CN111401278A
Authority
CN
China
Prior art keywords
image
head
feature
map
fusion
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010203042.0A
Other languages
Chinese (zh)
Inventor
贾挺猛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chongqing Unisinsight Technology Co Ltd
Original Assignee
Chongqing Unisinsight Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chongqing Unisinsight Technology Co Ltd filed Critical Chongqing Unisinsight Technology Co Ltd
Priority to CN202010203042.0A priority Critical patent/CN111401278A/en
Publication of CN111401278A publication Critical patent/CN111401278A/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/25 Fusion techniques
    • G06F18/253 Fusion techniques of extracted features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Abstract

Embodiments of the invention relate to the field of image recognition and provide a safety helmet recognition method and device, an electronic device, and a storage medium. The method comprises the following steps: acquiring an image to be recognized; inputting the image to be recognized into a pre-trained head recognition model and performing head recognition on it with the model to obtain the head position in the image; obtaining a head region image from the image to be recognized according to the head position; and inputting the head region image into a pre-trained safety helmet recognition model and performing safety helmet recognition on it with the model to obtain a result indicating whether the head in the head region image is wearing a safety helmet. Embodiments of the invention improve the recognition accuracy of safety helmets.

Description

Helmet identification method and device, electronic equipment and storage medium
Technical Field
The invention relates to the technical field of image recognition, and in particular to a safety helmet recognition method and device, an electronic device, and a storage medium.
Background
With continuous social progress and rapid urban expansion, the demand for safe operation keeps growing. In many construction-related industries such as building, coal, metallurgy, petrochemicals, and electric power, as well as in traffic scenarios such as safe-driving detection for non-motor vehicles, personal protective equipment is often lacking and the safety awareness of the personnel involved remains weak. In these industries, whether the personnel wear safety helmets needs to be detected, and accidents are avoided through video supervision, voice reminders, and similar means. Intelligent video analysis has therefore become the preferred approach to safety helmet recognition. However, existing safety helmet recognition methods suffer from low recognition accuracy.
Disclosure of Invention
The invention aims to provide a safety helmet recognition method and device, an electronic device, and a storage medium that can improve the recognition accuracy of safety helmets.
In order to achieve the above purpose, the embodiment of the present invention adopts the following technical solutions:
In a first aspect, an embodiment provides a safety helmet recognition method, the method comprising: acquiring an image to be recognized; inputting the image to be recognized into a pre-trained head recognition model and performing head recognition on it with the model to obtain the head position in the image; obtaining a head region image from the image to be recognized according to the head position; and inputting the head region image into a pre-trained safety helmet recognition model and performing safety helmet recognition on it with the model to obtain a result indicating whether the head in the head region image is wearing a safety helmet.
In a second aspect, an embodiment provides a safety helmet recognition device comprising an obtaining module, a head recognition module, a head region determining module, and a safety helmet recognition module. The obtaining module is configured to obtain an image to be recognized; the head recognition module is configured to input the image to be recognized into a pre-trained head recognition model and perform head recognition on it with the model to obtain the head position in the image; the head region determining module is configured to obtain a head region image from the image to be recognized according to the head position; and the safety helmet recognition module is configured to input the head region image into a pre-trained safety helmet recognition model and perform safety helmet recognition on it with the model to obtain a result indicating whether the head in the head region image is wearing a safety helmet.
In a third aspect, an embodiment provides an electronic device comprising one or more processors and a memory for storing one or more programs that, when executed by the one or more processors, cause the one or more processors to implement the safety helmet recognition method of any of the preceding embodiments.
In a fourth aspect, an embodiment provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the safety helmet recognition method of any of the preceding embodiments.
Compared with the prior art, embodiments of the invention provide a safety helmet recognition method and device, an electronic device, and a storage medium in which the head position is recognized first and whether a safety helmet is worn is then recognized based on that head position. This reduces interference from complex backgrounds in the image to be recognized and improves the recognition accuracy of safety helmets.
Drawings
To illustrate the technical solutions of the embodiments of the invention more clearly, the drawings used in the embodiments are briefly described below. It should be understood that the following drawings only illustrate some embodiments of the invention and should not be regarded as limiting its scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a method for identifying a safety helmet according to an embodiment of the present invention.
Fig. 2 is a diagram illustrating an exemplary structure of a human head recognition model according to an embodiment of the present invention.
Fig. 3 is a flow chart illustrating another method for identifying a safety helmet according to an embodiment of the present invention.
Fig. 4 shows a diagram illustrating a structure of a converged network provided by an embodiment of the present invention.
Fig. 5 is a flow chart illustrating another method for identifying a safety helmet according to an embodiment of the present invention.
Fig. 6 is a diagram illustrating an exemplary structure of a helmet identification model according to an embodiment of the present invention.
Fig. 7 is a flow chart illustrating another method for identifying a safety helmet according to an embodiment of the present invention.
Fig. 8 is a schematic structural diagram of a helmet identification device according to an embodiment of the present invention.
Fig. 9 shows a schematic structural diagram of an electronic device provided in an embodiment of the present invention.
Icon: 10-an electronic device; 11-a memory; 12-a communication interface; 13-a processor; 14-a bus; 100-helmet identification means; 110-an obtaining module; 120-head identification module; 130-head region determination module; 140-helmet identification module.
Detailed Description
To make the objects, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions are described below clearly and completely with reference to the drawings of the embodiments. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. The components of the embodiments, as generally described and illustrated in the figures, may be arranged and designed in a wide variety of configurations.
Thus, the following detailed description of the embodiments, as presented in the figures, is not intended to limit the claimed scope of the invention but merely represents selected embodiments. All other embodiments that a person skilled in the art can derive from these embodiments without creative effort fall within the protection scope of the invention.
It should be noted that like reference numbers and letters refer to like items in the following figures; once an item is defined in one figure, it need not be further defined or explained in subsequent figures.
In the description of the invention, terms such as "upper", "lower", "inside", and "outside" indicate orientations or positional relationships based on those shown in the drawings or in which the product is normally used. They are used only for convenience and simplicity of description and do not indicate or imply that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation; they should therefore not be construed as limiting the invention.
Furthermore, terms such as "first" and "second", if present, are used only to distinguish descriptions and are not to be construed as indicating or implying relative importance.
It should be noted that the features of the embodiments of the present invention may be combined with each other without conflict.
At present, safety helmet recognition is generally based on classical image processing: for targets such as the human body and the safety helmet, logical judgments on features such as shape, texture, and color decide whether a helmet is worn. Because of the diversity and complexity of target backgrounds, this approach requires a large target feature database and a correspondingly heavy image-processing workload, so the more helmet types there are, the poorer the robustness of the algorithm.
To keep helmet recognition robust, deep learning methods are generally adopted: the human body position or face position in an image is detected first, and a secondary helmet-wearing attribute recognition is then performed. Compared with classical image processing, this approach is more robust. The applicant studied it intensively and found the following. If the human body position is detected first and the helmet-wearing attribute is then recognized, differences in posture, clothing, background, and so on still cause problems: the body position may be inaccurate, a partially visible body may not be recognized, or the lower half of a body may be mistaken for a whole body. The helmet-wearing region then cannot be determined and the helmet type cannot be recognized accurately. If the face position is detected first, no face can be detected when a person's back is to the camera, so the helmet-wearing region of the head again cannot be determined and the helmet type cannot be recognized accurately.
In view of the above, and based on intensive study of the causes of these defects, the applicant proposes a safety helmet recognition method and device, an electronic device, and a storage medium that improve the recognition accuracy of safety helmets, described in detail below.
Referring to fig. 1, fig. 1 is a flowchart illustrating a method for identifying a safety helmet according to an embodiment of the present invention, where the method includes the following steps:
Step S101: acquire an image to be recognized.
In this embodiment, the image to be recognized may be obtained by preprocessing an original image. The original image may come from a picture taken by a camera, or be extracted from a video taken by the camera or from a video stream of a monitoring device; for example, one frame may be extracted from the video every preset number of frames to serve as an original image.
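As an illustration of the frame-extraction strategy described above, the following minimal sketch keeps one frame out of every `frame_step` frames; the default step of 25 is an arbitrary assumption, since the patent only specifies "every preset number of frames":

```python
def sample_frames(frames, frame_step=25):
    """Keep one frame every `frame_step` frames as candidate original images.

    `frames` is any sequence of decoded video frames; the default step is a
    hypothetical value, not fixed by the patent.
    """
    return [frame for index, frame in enumerate(frames) if index % frame_step == 0]
```

In practice `frames` would come from a video decoder; the sampling logic itself is independent of how the frames are obtained.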
In this embodiment, preprocessing of the original image includes, but is not limited to, image denoising, size scaling, and color enhancement.
Step S102: input the image to be recognized into a pre-trained head recognition model and perform head recognition on it with the model to obtain the head position in the image to be recognized.
In this embodiment, when a head is present in the image to be recognized, its position can be obtained with the head recognition model. The head position may be expressed as the coordinates of two diagonal corner points of the head's rectangular region in the image, as the coordinates of the center point of that rectangular region together with its height and width, or as another preset geometric shape with coordinate points that determine it. When no head is present in the image, a preset coordinate value may be used to indicate this; for example, when the diagonal corner coordinates obtained by the head recognition model are (0, 0) and (0, 0), it may be determined that no head is present in the image to be recognized.
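The two coordinate conventions mentioned above (diagonal corners versus center point plus width and height), and the all-zero sentinel for "no head", can be sketched as follows; the function names are illustrative, not from the patent:

```python
def corners_to_center(x1, y1, x2, y2):
    """Convert diagonal-corner coordinates to (center_x, center_y, width, height)."""
    return ((x1 + x2) / 2, (y1 + y2) / 2, x2 - x1, y2 - y1)

def head_present(x1, y1, x2, y2):
    """Diagonal corners (0, 0) and (0, 0) are the sentinel for 'no head detected'."""
    return (x1, y1, x2, y2) != (0, 0, 0, 0)
```

Either representation determines the same rectangular region; the sentinel check simply distinguishes an empty result from a detection at the image origin.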
Step S103: obtain a head region image from the image to be recognized according to the head position.
In this embodiment, the region determined by the head position may be used directly as the head region image, or the region may first be expanded by a certain ratio and the expanded region used as the head region image.
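Expanding the detected head region by a certain ratio, as described above, might look like the following sketch; the clamping to the image bounds is an assumption the patent does not spell out:

```python
def expand_box(x1, y1, x2, y2, ratio, img_w, img_h):
    """Grow a box by `ratio` of its width/height (half on each side),
    clamped to the image boundaries."""
    dx = (x2 - x1) * ratio / 2
    dy = (y2 - y1) * ratio / 2
    return (max(0, x1 - dx), max(0, y1 - dy),
            min(img_w, x2 + dx), min(img_h, y2 + dy))
```

The expanded box keeps some context around the head, which can help the downstream helmet classifier.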
Step S104: input the head region image into a pre-trained safety helmet recognition model and perform safety helmet recognition on it with the model to obtain a result indicating whether the head in the head region image is wearing a safety helmet.
In this embodiment, depending on the types of training labels used for the safety helmet recognition model, the recognition result may indicate whether the head in the head region image is wearing a safety helmet, and may further indicate whether any headgear is worn, whether the headgear worn is a safety helmet, or the type of safety helmet worn.
In the safety helmet recognition method provided by this embodiment, the head position is recognized first and whether a safety helmet is worn is then recognized based on that position. On one hand, this filters out a large proportion of background interference from the image below the head and shoulders, so the image region containing the helmet is located more accurately and the recognition accuracy is effectively improved. On the other hand, it avoids the failure to detect the upper half of the body, and hence to recognize the helmet, in scenes with large posture differences or body occlusion. Overall, interference from complex backgrounds in the image to be recognized is reduced and the recognition accuracy of safety helmets is improved.
Building on fig. 1, an embodiment of the invention provides a specific head recognition model structure and a method of using it to obtain the head position in the image to be recognized. The head recognition model comprises a first feature extraction network, a fusion network, detectors, and a post-processing module, where the number of detectors equals the number of feature maps output by the feature extraction network. The image to be recognized is input into the first feature extraction network for feature extraction to obtain several output feature maps; the output feature maps are input into the fusion network for feature fusion at several different scales to obtain a feature fusion map corresponding to each output feature map; each feature fusion map is input into its corresponding detector for target detection; and all detection results are input into the post-processing module, which screens and filters them to finally obtain the head target positions whose confidence is greater than a preset score threshold.
Referring to fig. 2, fig. 2 is a diagram illustrating an exemplary structure of a head recognition model according to an embodiment of the invention. The first feature extraction network in fig. 2 outputs 3 output feature maps: output feature map 1, output feature map 2, and output feature map 3. Each output feature map corresponds to one detector, so there are 3 detectors in total: detector 1, detector 2, and detector 3. The image to be recognized is input into the first feature extraction network to obtain the 3 output feature maps with sequentially increasing sizes; the 3 output feature maps are input into the fusion network in order to obtain the corresponding feature fusion maps 1, 2, and 3; feature fusion maps 1, 2, and 3 are input into detectors 1, 2, and 3 respectively; and the detection results of the three detectors are input into the post-processing module for processing, which finally obtains the head positions in the image to be recognized.
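The flow through the head recognition model in fig. 2 (feature extraction, fusion, per-map detection, post-processing) can be sketched abstractly as below; the components are passed in as callables, since the patent does not fix their implementations:

```python
def recognize_heads(image, extract, fuse, detectors, postprocess, score_thr=0.5):
    """Run the head recognition pipeline: one detector per feature fusion map,
    then screen all detections in a single post-processing step."""
    output_maps = extract(image)          # output feature maps, sizes increasing
    fusion_maps = fuse(output_maps)       # one feature fusion map per output map
    detections = []
    for detector, fusion_map in zip(detectors, fusion_maps):
        detections.extend(detector(fusion_map))
    return postprocess(detections, score_thr)
```

The zip pairing encodes the constraint stated above: the number of detectors equals the number of output feature maps.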
It should be noted that fig. 2 is only an example of a specific head recognition model and does not limit the model to 3 output feature maps. In an actual usage scenario the number of output feature maps may vary; in every case there is one detector corresponding to each output feature map.
Referring to fig. 3, fig. 3 is a flowchart illustrating another method for identifying a safety helmet according to an embodiment of the present invention, where step S102 includes the following sub-steps:
Sub-step S1021: perform feature extraction on the image to be recognized with the first feature extraction network to obtain several output feature maps with sequentially increasing sizes.
In this embodiment, the first feature extraction network may be a deep residual network (ResNet), such as ResNet50 or ResNet101. The first feature extraction network may include multiple convolution layers and pooling layers; different numbers of convolutions yield output feature maps of different sizes. In this embodiment, the output of the first feature extraction network comprises several output feature maps with sequentially increasing sizes; for example, with 3 output feature maps, the sizes in increasing order may be 30 x 16, 60 x 32, and 120 x 64.
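The example sizes above (30 x 16, 60 x 32, 120 x 64) are consistent with a 960 x 512 input downsampled at strides 32, 16, and 8, a common ResNet-style arrangement; the patent does not state the input size or strides, so the sketch below computes the map sizes under that assumption:

```python
def output_map_sizes(input_w, input_h, strides=(32, 16, 8)):
    """Feature-map (width, height) per stride, listed in order of
    increasing map size (decreasing stride)."""
    return [(input_w // s, input_h // s) for s in strides]
```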
Sub-step S1022: input the output feature maps into the fusion network in sequence, starting from the smallest, and perform feature fusion with the fusion network to obtain the feature fusion map corresponding to each output feature map.
In this embodiment, the output feature maps are fused in sequence starting from the smallest one to obtain the corresponding feature fusion maps. As a specific implementation, the method of fusing several output feature maps with the fusion network may be as follows.
First, use the smallest output feature map as the initial feature map.
Secondly, perform convolution processing on the initial feature map with a convolution block to obtain its first intermediate feature map, where the convolution block comprises several convolution layers and pooling layers with identical parameters.
In this embodiment, the parameters of a convolution layer include, but are not limited to, the convolution kernel, stride, and padding; the parameters of a pooling layer include, but are not limited to, the pooling method, such as max pooling or average pooling.
Thirdly, perform convolution processing on the first intermediate feature map to obtain the feature fusion map corresponding to the initial feature map.
In this embodiment, the smallest output feature map is convolved by a convolution block to obtain its first intermediate feature map, and the feature fusion map of the smallest output feature map is obtained by further convolving that first intermediate feature map.
Fourthly, sequentially perform convolution and upsampling on the first intermediate feature map to obtain a second intermediate feature map corresponding to the initial feature map.
In this embodiment, the upsampling may be implemented by deconvolution.
Fifthly, fuse the second intermediate feature map with the target output feature map to obtain the initial fusion map corresponding to the initial feature map, where the target output feature map is the next output feature map after the initial feature map.
Finally, replace the initial feature map with its corresponding initial fusion map and repeat the above steps until the feature fusion map corresponding to each output feature map is obtained.
In this embodiment, for every output feature map other than the smallest, the output feature map must first be fused with the second intermediate feature map of the previous output feature map (i.e., the map obtained by sequentially convolving and upsampling the previous map's first intermediate feature map) to obtain the initial fusion map of that output feature map. The initial fusion map is then convolved by a convolution block to obtain its first intermediate feature map, and this first intermediate feature map is convolved again to obtain the feature fusion map corresponding to the output feature map.
To illustrate the fusion network and the fusion process more clearly, refer to fig. 4, which shows an exemplary structure of the fusion network provided by an embodiment of the invention. The input of the fusion network is 3 output feature maps with gradually increasing sizes, output feature maps 1 to 3, and the output is the feature fusion map corresponding to each, feature fusion maps 1 to 3. The smallest map, output feature map 1, is convolved by the convolution block to obtain its first intermediate feature map, which is further convolved to obtain the feature fusion map of output feature map 1. The first intermediate feature map of output feature map 1 is then convolved and upsampled to obtain its second intermediate feature map, which is fused with output feature map 2 to obtain the initial fusion map of output feature map 2. That initial fusion map is convolved by a convolution block to obtain its first intermediate feature map, which is further convolved to obtain the feature fusion map of output feature map 2; the feature fusion map of output feature map 3 is obtained in the same way.
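The top-down fusion loop walked through above and in fig. 4 can be sketched as follows, with the convolution block, convolution, upsampling, and fusion operations passed in as callables, since their concrete forms are not fixed by the patent:

```python
def fuse_features(output_maps, conv_block, conv, upsample, merge):
    """Fuse output feature maps from smallest to largest.

    Each map yields a first intermediate map (conv_block) and a feature
    fusion map (conv); a second intermediate map (conv then upsample) is
    merged into the next larger output map to form its initial fusion map,
    which replaces the initial feature map on the next iteration.
    """
    fusion_maps = []
    current = output_maps[0]                      # smallest output feature map
    for i in range(len(output_maps)):
        first_inter = conv_block(current)
        fusion_maps.append(conv(first_inter))     # feature fusion map for map i
        if i + 1 < len(output_maps):
            second_inter = upsample(conv(first_inter))
            current = merge(second_inter, output_maps[i + 1])  # initial fusion map
    return fusion_maps
```

With string-building stand-ins for the operations, the data flow of fig. 4 can be traced exactly.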
It should be noted that fig. 4 is only an example of a specific fusion network structure. In an actual application scenario, any number of output feature maps may be used as inputs; the fusion network may contain several convolution blocks whose parameters may or may not be identical, several convolution layers whose parameters may or may not be identical, and several upsampling operations whose parameters may or may not be identical. The embodiment of the invention does not limit this.
Sub-step S1023: perform target detection on each feature fusion map with the detector corresponding to its output feature map to obtain the head target frame positions in each feature fusion map and the confidence of each frame.
In this embodiment, a detector detects the head target frame positions and confidences in its corresponding feature fusion map; as a specific implementation, the detector may be implemented with a YOLO layer.
Sub-step S1024: process the head target frame positions in all the feature fusion maps with the post-processing module according to their confidences, and take the position in the image to be recognized that corresponds to a head target frame whose processed confidence is greater than a preset threshold as a head position in the image to be recognized.
In this embodiment, the confidence of a head target frame in a feature fusion map represents the probability that the recognized frame actually contains a head.
In this embodiment, processing the head target frame positions in all the feature fusion maps includes sorting, merging, and screening them according to their confidences; the position in the image to be recognized corresponding to a head target frame whose processed confidence is greater than the preset threshold is obtained as a head position in the image to be recognized.
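The sorting, merging, and screening of overlapping head target frames described above is typically realized as confidence-thresholded non-maximum suppression; the patent does not name the exact algorithm, so the following is a hedged sketch:

```python
def iou(a, b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def screen_head_frames(boxes, scores, score_thr=0.5, iou_thr=0.5):
    """Keep indices of boxes above score_thr, visiting them in descending
    score order and suppressing boxes that overlap an already-kept box."""
    order = sorted(range(len(boxes)), key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if scores[i] <= score_thr:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thr for j in keep):
            keep.append(i)
    return keep
```

The score threshold plays the role of the "preset threshold" in sub-step S1024; the IoU suppression corresponds to the merging of overlapping frames.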
In this embodiment, the constructed head recognition model needs to be trained to obtain a trained head recognition model. A specific training method may be as follows.
First, collect human body pictures and annotate the head positions.
In this embodiment, the human body pictures cover various human scenes. A rectangular frame is used to select the position range of each head, and a coordinate label file is produced. The label record format is: head class label 0, head center abscissa cx, head center ordinate cy, head width w, and head height h.
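A label record in the format above (class 0, cx, cy, w, h) could be written as below; whether the values are normalized by the image size is not stated in the patent, so the raw-pixel variant is shown:

```python
def head_label_line(cx, cy, w, h):
    """One label record: head class 0, center (cx, cy), width w, height h.

    Values are emitted in pixels; normalizing by image size (YOLO-style)
    would be an equally plausible reading of the format.
    """
    return f"0 {cx} {cy} {w} {h}"
```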
Secondly, the human body picture is subjected to image enhancement.
Because the target scene varies, human body pictures containing heads differ in background, scale, color and the like; therefore, various image enhancement processes such as image scaling, random noise and random color are applied to the labeled pictures.
As a specific implementation, the following enhancement processing can be performed on the human body pictures:
(1) Randomly selecting 40% of the human body pictures and adding image noise.
(2) Randomly selecting 20% of the human body pictures for multi-scale resizing, with the scaling factor controlled between 0.8 and 1.2 relative to the original input size of the human head recognition model.
(3) Randomly selecting 20% of the human body pictures for image color enhancement, including image brightness, contrast and the like.
The random selection proportions of 40% and 20% and the scaling range of 0.8-1.2 can be set according to actual needs.
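The random augmentation above can be sketched as follows; the 40%/20% probabilities and the 0.8-1.2 scale range come from the text, while the noise and brightness/contrast magnitudes are illustrative assumptions:

```python
import random
import numpy as np

def augment(image, rng=random.Random()):
    """Apply the random enhancements described above to an HxWxC uint8 image."""
    out = image.astype(np.float32)
    if rng.random() < 0.4:                 # 40%: add image noise (assumed Gaussian)
        out += np.random.randn(*out.shape) * 5.0
    if rng.random() < 0.2:                 # 20%: multi-scale resize
        scale = rng.uniform(0.8, 1.2)      # scale relative to model input size
        h, w = out.shape[:2]
        new_h, new_w = int(h * scale), int(w * scale)
        # nearest-neighbour resize, kept dependency-free for the sketch
        ys = (np.arange(new_h) * h // new_h).clip(0, h - 1)
        xs = (np.arange(new_w) * w // new_w).clip(0, w - 1)
        out = out[ys][:, xs]
    if rng.random() < 0.2:                 # 20%: colour enhancement
        out = out * rng.uniform(0.8, 1.2)  # brightness jitter (assumed range)
        out = (out - out.mean()) * rng.uniform(0.8, 1.2) + out.mean()  # contrast
    return np.clip(out, 0, 255).astype(np.uint8)
```

In practice a library such as OpenCV or torchvision would supply the resize and colour operations; the sketch only illustrates the branching probabilities.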
And thirdly, converting the processed human body picture into a training sample.
In this embodiment, a set of input sizes with different scales is designed in advance, and one of them is chosen at random as the input size of the human head recognition model, so as to improve the generalization capability of the model. For example, a set of n different input image scales is defined as size_list = [s1, s2, ..., sn], where si is the i-th input size. A random scale s is drawn as s = size_list[rand(0, n)], where rand is a random function; the randomly generated model input width dim_w and height dim_h are then:
dim_w = s, dim_h = s
For any human body picture, a scale s is randomly selected from size_list, and the picture is scaled according to s to obtain a training sample.
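The random scale selection can be sketched as below; the concrete values in size_list are illustrative assumptions (the patent only requires n distinct sizes; YOLO-style models commonly use multiples of 32):

```python
import random

# illustrative multi-scale input set; the text only requires n distinct sizes
size_list = [320, 352, 384, 416, 448]

def pick_input_size(rng=random.Random()):
    """s = size_list[rand(0, n)]; the model input is square: dim_w = dim_h = s."""
    s = size_list[rng.randrange(len(size_list))]
    return s, s  # (dim_w, dim_h)

def rescale_sample(width, height, s):
    """Per-axis scaling factors to fit a picture to the chosen square input."""
    return (s / width, s / height)
```

Each training sample is resized with these factors before being fed to the head recognition model.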
Fourth, an anchor is set for each detector to facilitate mapping to the head position in the corresponding training sample based on the position of the head target box obtained by the detector.
In this embodiment, each detector may correspond to 3 anchors, and the anchor size of each detector may be obtained according to a training image clustering method for an actual application scene, so that the model prediction capability for the actual scene may be improved.
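Deriving anchor sizes by clustering the labeled box sizes of the training set (as popularized by YOLOv2/v3) can be sketched with a plain k-means over (w, h) pairs; k = 9 covers three detectors with three anchors each. The Euclidean distance here is a simplifying assumption (IoU-based distance is also common):

```python
import random

def kmeans_anchors(box_sizes, k=9, iters=50, rng=random.Random(0)):
    """Cluster (w, h) box sizes into k anchors with plain Euclidean k-means."""
    centers = rng.sample(box_sizes, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in box_sizes:
            # assign each labeled box to the nearest anchor centre
            j = min(range(k),
                    key=lambda i: (w - centers[i][0]) ** 2 + (h - centers[i][1]) ** 2)
            clusters[j].append((w, h))
        for i, c in enumerate(clusters):
            if c:  # recompute each centre as its cluster mean
                centers[i] = (sum(w for w, _ in c) / len(c),
                              sum(h for _, h in c) / len(c))
    # sort by area: small anchors are assigned to the large (shallow) feature maps
    return sorted(centers, key=lambda a: a[0] * a[1])
```

Clustering on boxes from the actual deployment scene, as the text suggests, keeps the anchors matched to the head sizes the detectors will actually see.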
And fifthly, inputting the training sample into the human head recognition model for training until the human head recognition model which is trained is obtained.
In this embodiment, the initial learning rate is set to 0.001 and the learning strategy to the poly strategy. A training sample is input into the human head recognition model for processing (the processing procedure follows step S102 and substeps S1021 to S1024), and the recognized head position is output. The recognized head position is compared with the position pre-labeled in the training sample, and the parameters of the human head recognition model (such as the convolution kernels and strides) are continuously adjusted until the iteration count or a preset condition is met, finally yielding a trained human head recognition model. For example, training may iterate for 100 epochs.
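The poly learning-rate strategy mentioned above decays the rate polynomially over training. A common form is sketched below; the power 0.9 is an assumed typical value, not given in the text:

```python
def poly_lr(base_lr, iteration, max_iterations, power=0.9):
    """Poly schedule: lr = base_lr * (1 - iteration / max_iterations) ** power."""
    return base_lr * (1.0 - iteration / max_iterations) ** power
```

With base_lr = 0.001, the rate starts at 0.001 and decays smoothly to 0 at the final iteration.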
According to the helmet identification method provided by the embodiment of the invention, by constructing a head recognition model and fusing up-sampled deep feature maps of different scales with shallow feature maps, the problem that small, distant heads are detected inaccurately in practical application scenes is solved, the robustness of the head detection algorithm in complex scenes is improved, and the accuracy of helmet identification is effectively improved.
On the basis of fig. 1, in order to reduce the deviation of the head position recognized by the head recognition model, another helmet recognition method is further provided in the embodiment of the present invention, please refer to fig. 5, fig. 5 shows a flowchart of another helmet recognition method provided in the embodiment of the present invention, and step S103 includes the following sub-steps:
and step S1031, taking a rectangular area in the image to be recognized, which takes the first coordinate and the second coordinate as opposite angles, as an initial area map.
In this embodiment, the head position includes a first coordinate and a second coordinate, and the first coordinate and the second coordinate are used to determine a corresponding area in the image to be recognized. For example, when the region corresponding to the head position is a rectangle, the first coordinate and the second coordinate are coordinates of the rectangle that are diagonal to each other.
Step S1032, the initial area map is expanded by a preset proportion, and a head area map is obtained.
In this embodiment, the first and second coordinates of the expanded initial region map (i.e., the head region map) are obtained from the first and second coordinates of the initial region map according to the preset ratio. For example, when the initial region map is a rectangle with upper-left corner (X1, Y1), lower-right corner (X2, Y2) and a preset ratio of 20%, the head region map is obtained by expanding the initial region map by 20% upward, downward, leftward and rightward. Denoting the upper-left corner of the head region map as (X3, Y3) and its lower-right corner as (X4, Y4):
X3=X1–(X2-X1)*0.2;
Y3=Y1–(Y2-Y1)*0.2;
X4=X2+(X2-X1)*0.2;
Y4=Y2+(Y2-Y1)*0.2。
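The four formulas above can be sketched as a small helper. The 20% ratio follows the example in the text; clamping to the image bounds is an added assumption so the expanded box stays valid near image edges:

```python
def expand_box(x1, y1, x2, y2, ratio=0.2, img_w=None, img_h=None):
    """Expand a head box by `ratio` of its width/height on every side."""
    dw, dh = (x2 - x1) * ratio, (y2 - y1) * ratio
    nx1, ny1 = x1 - dw, y1 - dh
    nx2, ny2 = x2 + dw, y2 + dh
    if img_w is not None:  # optional clamp to image bounds (assumption, not in the text)
        nx1, nx2 = max(0, nx1), min(img_w, nx2)
    if img_h is not None:
        ny1, ny2 = max(0, ny1), min(img_h, ny2)
    return nx1, ny1, nx2, ny2
```

For a 100x150 initial region at (100, 100), the head region map becomes (80, 70)-(220, 280), matching the formulas above.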
according to the safety helmet identification method provided by the embodiment of the invention, the initial area image corresponding to the head position identified by the head identification model is expanded, and the obtained head area image eliminates information such as a human body, a background and the like, so that on one hand, the interference of invalid background information is eliminated, and the safety helmet identification accuracy can be greatly improved; on the other hand, the head region diagram is not limited by the categories of human bodies and scenes, is easy to collect and label, can save the labor cost of data labeling, and effectively solves the problem of the final recognition accuracy of the safety helmet caused by the deviation of the head recognition model in recognizing the head position.
On the basis of fig. 1, an embodiment of the present invention provides a specific structure of a safety helmet identification model and an implementation method for performing safety helmet identification by using the safety helmet identification model, where the safety helmet identification model includes a second feature extraction network, a pooling layer, a full connection layer, and an activation layer. Inputting the human head region map into a second feature extraction network for image feature extraction, performing maximum pooling processing on the obtained image feature map by using a pooling layer to obtain a pooled feature map, performing dimension reduction on the pooled feature map by using a full connection layer to obtain a one-dimensional vector, inputting the one-dimensional vector into an activation layer to obtain a probability value of wearing a safety helmet on the human head, and obtaining an identification result of whether the human head in the human head region map is worn with the safety helmet according to the probability value.
Referring to fig. 6, fig. 6 is a diagram illustrating an exemplary structure of a helmet identification model according to an embodiment of the present invention.
Referring to fig. 7, fig. 7 is a flowchart illustrating another method for identifying a safety helmet according to an embodiment of the present invention, where step S104 includes the following sub-steps:
and a substep S1041 of inputting the human head region diagram into a second feature extraction network for image feature extraction to obtain an image feature diagram.
In the present embodiment, the image features include, but are not limited to, texture, color, edge contour, and the like.
And a substep S1042 of performing maximum pooling on the image feature map by using the pooling layer to obtain a pooled feature map.
In this embodiment, the pooling layer mainly serves the following functions: (1) retaining the main features while reducing the parameters and computation passed to the next layer, preventing overfitting; (2) maintaining certain invariances, including translation, rotation and scale. Common pooling operations are maximum pooling (max-pooling) and mean pooling (mean-pooling). Maximum pooling takes the maximum value over a small area; for example, with a pooling window size of 2×2, the area to be pooled is as follows:
1 1 1 0
2 3 3 1
2 3 2 1
1 2 2 1
Taking 2×2 as a unit area and taking the maximum value in each unit area, the pooled result is as follows:
3 3
3 2
As a specific implementation, this embodiment adopts maximum pooling, which reduces the number of training parameters and the amount of computation while extracting the strongest semantic information from the features.
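The 2×2 max-pooling example above can be reproduced with a short sketch (stride 2, no padding, single-channel input given as a list of rows):

```python
def max_pool_2x2(x):
    """2x2 max pooling with stride 2 over a 2-D list-of-lists feature map."""
    h, w = len(x), len(x[0])
    return [[max(x[i][j], x[i][j + 1], x[i + 1][j], x[i + 1][j + 1])
             for j in range(0, w, 2)]    # slide the window across columns
            for i in range(0, h, 2)]     # and down the rows
```

Applied to the 4×4 region shown in the text, this yields the 2×2 result [[3, 3], [3, 2]].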
And a substep S1043 of reducing the dimension of the pooling feature map by using the full connection layer to obtain a one-dimensional vector.
And a substep S1044 of inputting the one-dimensional vector into the activation layer to obtain a probability value of the helmet worn on the head of the person, and obtaining an identification result of whether the head of the person in the head area diagram is worn with the helmet according to the probability value.
In this embodiment, the classification result depends on the labeling scheme used when training the helmet recognition model and on the corresponding settings of the full connection layer and the activation layer. For example, when two label types are used during training (0 - no hat, 1 - hat), the classification result is the probability of wearing a hat: an output probability of 0.2 may be taken as the recognition result "no hat", while an output probability of 0.8 may be taken as "hat". As another example, when three label types are used during training (0 - no hat, 1 - ordinary hat, 2 - safety helmet), the classification result consists of the probabilities of wearing no hat, an ordinary hat and a safety helmet.
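Mapping the output probabilities to a recognition result can be sketched for both labeling schemes. The 0.5 decision threshold for the two-class case is an assumption; the text only gives 0.2 and 0.8 as example outputs:

```python
def decide_two_class(p_hat, threshold=0.5):
    """Two-class labeling: 0 - no hat, 1 - hat; p_hat is P(wearing a hat)."""
    return 1 if p_hat >= threshold else 0

def decide_three_class(probs):
    """Three-class labeling: 0 - no hat, 1 - ordinary hat, 2 - safety helmet.
    Returns the index of the most probable class."""
    return max(range(3), key=lambda i: probs[i])
```

With the text's examples, an output of 0.2 maps to "no hat" and 0.8 to "hat".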
In this embodiment, in order to obtain the trained helmet identification model, the constructed helmet identification model needs to be trained, and the specific training method may be:
Firstly, pictures of heads wearing hats are collected and the hat-wearing type is labeled.
In this embodiment, pictures of heads wearing hats can be collected from two sources: (1) open-source head-wearing-hat data crawled from the network; (2) head region maps obtained from the head positions output by the head recognition model. The pictures are divided into several classes, for example three classes with different labels: 0 - no hat, 1 - ordinary hat, 2 - safety helmet. The proportions of the three classes of pictures can be set, for example, to 3:2:1.
Secondly, image enhancement is performed on the pictures of heads wearing hats.
Optionally, the following random enhancement processing is performed on the pictures:
(1) Randomly selecting 40% of the pictures and adding image noise.
(2) Randomly selecting 20% of the pictures for multi-scale resizing, with the scaling factor controlled between 0.8 and 1.2 relative to the original input size of the helmet recognition model.
(3) Randomly selecting 20% of the pictures for image color enhancement, including image brightness, contrast and the like.
(4) Randomly selecting 20% of the pictures for image cropping, with a random cropping ratio of 5%-10%.
The random selection proportions of 40% and 20%, the scaling range of 0.8-1.2 and the random cropping ratio of 5%-10% can be set according to actual needs.
Thirdly, the processed pictures of heads wearing hats are converted into training samples.
And fourthly, inputting the training samples into the safety helmet identification model for training until the trained safety helmet identification model is obtained.
In this embodiment, the initial learning rate is set to 0.001 and the learning strategy to the step strategy. A training sample is input into the helmet recognition model for processing (the processing procedure follows step S104 and substeps S1041 to S1044), and the probability of wearing a helmet is output. This probability is compared with the pre-labeled class; the error between the recognition result and the actual value is computed using the Euclidean distance as the model loss (Loss) function, and the model weight coefficients are continuously updated until the iteration count or a preset condition is met, finally yielding a trained helmet recognition model. For example, training may iterate for 150 epochs.
According to the helmet identification method provided by the embodiment of the invention, by constructing a helmet recognition model, a recognition result of whether the head at each recognized head position in the image to be recognized wears a safety helmet can be obtained.
It should be noted that further processing may be performed according to the recognition result, for example, an alarm message may be issued when the recognition result is that the helmet is not worn.
S1021 to S1024 may replace step S102 in fig. 1, 5, and 7, S1031 to S1032 may replace step S103 in fig. 1, 3, and 7, and S1041 to S1044 may replace step S104 in fig. 1, 3, and 5.
In order to execute the corresponding steps in the above embodiments and various possible implementations, an implementation of the helmet identification device is provided below, please refer to fig. 8, and fig. 8 shows a schematic structural diagram of a helmet identification device 100 according to an embodiment of the present invention. It should be noted that the basic principle and the technical effects of the helmet identification device 100 provided in the present embodiment are the same as those of the above embodiments, and for the sake of brief description, no mention is made in this embodiment, and reference may be made to the corresponding contents in the above embodiments.
The helmet identification apparatus 100 includes an obtaining module 110, a head identification module 120, a head region determining module 130, and a helmet identification module 140.
The obtaining module 110 is configured to obtain an image to be identified.
And the human head recognition module 120 is configured to input the image to be recognized into a pre-trained human head recognition model, and perform human head recognition on the image to be recognized by using the human head recognition model to obtain a human head position in the image to be recognized. Specifically, the human head recognition model includes a first feature extraction network, a fusion network, detectors, and a post-processing module, where the number of the detectors is the same as the number of feature maps output by the feature extraction network, and the human head recognition module 120 is specifically configured to: performing feature extraction on an image to be recognized by using a first feature extraction network to obtain a plurality of output feature maps with sequentially increasing sizes; sequentially inputting the output feature maps into a fusion network according to the sequence from small to large, and performing feature fusion by using the fusion network to obtain a feature fusion map corresponding to each output feature map; performing target detection on each feature fusion image by using a detector corresponding to each output feature image to obtain the position of a human head target frame in each feature fusion image and the confidence of the corresponding human head target frame; and processing the positions of the human head target frames in all the feature fusion images by utilizing a post-processing module according to the confidence degrees of the human head target frames in all the feature fusion images, and taking the position in the image to be recognized corresponding to the position of the human head target frame in the feature fusion image with the processed confidence degree being greater than a preset threshold value as the human head position in the image to be recognized.
Specifically, the human head recognition module 120 is specifically configured to, when sequentially inputting the plurality of output feature maps into the fusion network in the order from small to large, and performing feature fusion by using the fusion network to obtain a feature fusion map corresponding to each output feature map: taking the minimum output characteristic graph as an initial characteristic graph; performing convolution processing on the initial feature map by a convolution block to obtain a first intermediate feature map of the initial feature map, wherein the convolution block comprises a plurality of convolution layers and pooling layers with the same parameters; performing convolution processing on the first intermediate feature map to obtain a feature fusion map corresponding to the initial feature map; sequentially performing convolution and up-sampling processing on the first intermediate characteristic diagram to obtain a second intermediate characteristic diagram corresponding to the initial characteristic diagram; fusing the second intermediate feature map with a target output feature map to obtain an initial fusion map corresponding to the feature map, wherein the target output feature map is a subsequent feature map continuous with the initial feature map; and (4) replacing the initial characteristic diagram with the initial fusion diagram corresponding to the initial characteristic diagram, and repeatedly executing the steps until the characteristic fusion diagram corresponding to each output characteristic diagram is obtained.
And a head region determining module 130, configured to obtain a head region map in the image to be identified according to the head position.
Specifically, the head position includes a first coordinate and a second coordinate, and the head region determining module 130 is specifically configured to: taking a rectangular area in the image to be identified, which takes the first coordinate and the second coordinate as opposite angles, as an initial area map; and expanding the initial area map in a preset proportion to obtain a head area map.
And the safety helmet identification module 140 is configured to input the head region map into a pre-trained safety helmet identification model, and perform safety helmet identification on the head region map by using the safety helmet identification model to obtain an identification result of whether the head in the head region map is worn by a safety helmet.
Specifically, the safety helmet identification model comprises a second feature extraction network, a pooling layer, a full-connection layer and an activation layer; the helmet identification module 140 is specifically configured to: inputting the head region diagram into a second feature extraction network for image feature extraction to obtain an image feature diagram; performing maximum pooling processing on the image feature map by using a pooling layer to obtain a pooled feature map; reducing the dimension of the pooling feature map by using a full-connection layer to obtain a one-dimensional vector; and inputting the one-dimensional vector into the activation layer to obtain a probability value of the safety helmet worn on the head of the person, and obtaining an identification result of whether the head of the person in the head region graph wears the safety helmet or not according to the probability value.
Referring to fig. 9, fig. 9 shows a schematic structural diagram of an electronic device 10 according to an embodiment of the present invention, where the electronic device 10 includes a memory 11, a communication interface 12, a processor 13, and a bus 14. The memory 11, the communication interface 12 and the processor 13 are connected by a bus 14, the processor 13 is used for executing the computer program stored in the memory 11, and the above-mentioned helmet identification method can be applied to the electronic device 10.
The electronic device 10 may be, but is not limited to, a smart phone, a tablet computer, a notebook computer, a mainframe, a server, or the like.
The electronic device 10 communicates with other electronic devices through a communication interface 12.
The Memory 11 may include a high-speed Random Access Memory (RAM) and may also include a non-volatile Memory (non-volatile Memory), such as at least one disk Memory.
The bus 14 may be an ISA bus, PCI bus, EISA bus, or the like. Only one bi-directional arrow is shown in fig. 9, but this does not indicate only one bus or one type of bus.
The memory 11 is used for storing a program, such as the helmet identification apparatus 100 in fig. 8; after receiving an execution instruction, the processor 13 executes the program to implement the helmet identification method disclosed in the above embodiments of the present invention.
The processor 13 may be an integrated circuit chip having signal processing capabilities. In implementation, the steps of the above method may be performed by integrated logic circuits of hardware or by instructions in the form of software in the processor 13. The Processor 13 may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic, or discrete hardware components.
The present embodiment provides a computer-readable storage medium on which a computer program is stored, which computer program, when executed by a processor 13, implements a helmet identification method as described in any of the preceding embodiments.
In summary, embodiments of the present invention provide a method and an apparatus for identifying a safety helmet, an electronic device, and a storage medium, where the method includes: acquiring an image to be identified; inputting an image to be recognized into a pre-trained human head recognition model, and recognizing the human head of the image to be recognized by using the human head recognition model to obtain the position of the human head in the image to be recognized; obtaining a human head area image in the image to be identified according to the human head position; and inputting the head area diagram into a pre-trained safety helmet identification model, and carrying out safety helmet identification on the head area diagram by using the safety helmet identification model to obtain an identification result of whether the head in the head area diagram is worn with a safety helmet. Compared with the prior art, the embodiment of the invention can further identify the identification result whether the head of the head position is worn with the safety helmet or not by identifying the head position firstly and then based on the head position, thereby reducing the interference caused by the complex background in the image to be identified and improving the identification accuracy of the safety helmet.
The above description is only for the specific embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. A method of identifying a hard hat, the method comprising:
acquiring an image to be identified;
inputting the image to be recognized into a pre-trained human head recognition model, and recognizing the human head of the image to be recognized by using the human head recognition model to obtain the position of the human head in the image to be recognized;
obtaining a human head area image in the image to be identified according to the human head position;
and inputting the head area diagram into a pre-trained safety helmet identification model, and carrying out safety helmet identification on the head area diagram by using the safety helmet identification model to obtain an identification result of whether the head in the head area diagram is worn with a safety helmet.
2. The helmet identification method of claim 1 wherein the head recognition model comprises a first feature extraction network, a fusion network, detectors, and a post-processing module, wherein the number of detectors is the same as the number of feature maps output by the feature extraction network;
the step of utilizing the human head recognition model to carry out human head recognition on the image to be recognized to obtain the human head position in the image to be recognized comprises the following steps:
performing feature extraction on the image to be identified by using the first feature extraction network to obtain a plurality of output feature maps with sequentially increasing sizes;
sequentially inputting the output feature maps into the fusion network according to the sequence from small to large, and performing feature fusion by using the fusion network to obtain a feature fusion map corresponding to each output feature map;
performing target detection on each feature fusion image by using the detector corresponding to each output feature image to obtain the position of a human head target frame in each feature fusion image and the confidence of the corresponding human head target frame;
and processing the positions of the human head target frames in all the feature fusion images by utilizing the post-processing module according to the confidence degrees of the human head target frames in all the feature fusion images, and taking the position in the image to be recognized, corresponding to the position of the human head target frame in the feature fusion images with the processed confidence degrees larger than a preset threshold value, as the human head position in the image to be recognized.
3. The method for identifying a helmet according to claim 2, wherein the step of sequentially inputting a plurality of the output feature maps into the fusion network in descending order, and performing feature fusion using the fusion network to obtain a feature fusion map corresponding to each of the output feature maps comprises:
taking the minimum output characteristic graph as an initial characteristic graph;
performing convolution processing on the initial feature map by a convolution block to obtain a first intermediate feature map of the initial feature map, wherein the convolution block comprises a plurality of convolution layers and pooling layers with the same parameters;
performing convolution processing on the first intermediate feature map to obtain a feature fusion map corresponding to the initial feature map;
sequentially performing convolution and up-sampling processing on the first intermediate characteristic diagram to obtain a second intermediate characteristic diagram corresponding to the initial characteristic diagram;
fusing the second intermediate feature map with a target output feature map to obtain an initial fusion map corresponding to the initial feature map, wherein the target output feature map is a subsequent feature map continuous with the initial feature map;
and replacing the initial feature map with the initial fusion map corresponding to the initial feature map, and repeatedly executing the steps until the feature fusion map corresponding to each output feature map is obtained.
4. The method of claim 1, wherein the helmet identification model comprises a second feature extraction network, a pooling layer, a full connectivity layer, and an activation layer;
the step of utilizing the safety helmet identification model to carry out safety helmet identification on the head area diagram to obtain the identification result of whether the head in the head area diagram is worn with a safety helmet or not comprises the following steps:
inputting the head region diagram into the second feature extraction network for image feature extraction to obtain an image feature diagram;
performing maximum pooling processing on the image feature map by using the pooling layer to obtain a pooled feature map;
reducing the dimension of the pooling feature map by using the full-connection layer to obtain a one-dimensional vector;
and inputting the one-dimensional vector into the activation layer to obtain a probability value of the helmet worn on the head of the person, so as to obtain an identification result of whether the head of the person in the head region graph is worn with the helmet or not according to the probability value.
5. The method for identifying a helmet according to claim 1, wherein the head position includes a first coordinate and a second coordinate, and the step of obtaining the head region map in the image to be identified according to the head position comprises:
taking a rectangular area in the image to be identified, which takes the first coordinate and the second coordinate as opposite angles, as an initial area map;
and expanding the initial area map in a preset proportion to obtain the head area map.
6. An apparatus for identifying a safety helmet, the apparatus comprising:
the acquisition module is used for acquiring an image to be identified;
the human head recognition module is used for inputting the image to be recognized into a pre-trained human head recognition model, and performing human head recognition on the image to be recognized by using the human head recognition model to obtain the position of the human head in the image to be recognized;
the head area determining module is used for obtaining a head area image in the image to be identified according to the head position;
and the safety helmet identification module is used for inputting the head area diagram into a safety helmet identification model trained in advance, and utilizing the safety helmet identification model to carry out safety helmet identification on the head area diagram to obtain an identification result of whether the head in the head area diagram is worn with a safety helmet.
7. The safety helmet identification apparatus according to claim 6, wherein the head recognition model comprises a first feature extraction network, a fusion network, detectors, and a post-processing module, the number of the detectors being the same as the number of feature maps output by the first feature extraction network;
the head recognition module is specifically used for:
performing feature extraction on the image to be identified by using the first feature extraction network to obtain a plurality of output feature maps of successively increasing size;
inputting the output feature maps into the fusion network in order from smallest to largest, and performing feature fusion by using the fusion network to obtain a feature fusion map corresponding to each output feature map;
performing target detection on each feature fusion map by using the detector corresponding to the respective output feature map to obtain the position of each head target box in the feature fusion map and the confidence of the corresponding head target box;
and processing the positions of the head target boxes in all the feature fusion maps by using the post-processing module according to the confidences of the head target boxes, and taking, as the head position in the image to be identified, the position in the image to be identified that corresponds to a head target box whose processed confidence is greater than a preset threshold.
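The claim leaves the post-processing algorithm unnamed; in detection pipelines of this kind it is typically non-maximum suppression followed by a confidence threshold. A minimal Python sketch under that assumption (the function name and the threshold values are illustrative, not taken from the patent):

```python
def postprocess(boxes, confidences, conf_thresh=0.5, iou_thresh=0.45):
    """Keep head target boxes whose confidence exceeds the preset
    threshold, suppressing lower-confidence boxes that overlap an
    already-kept box (greedy non-maximum suppression)."""
    def iou(a, b):
        # Intersection-over-union of two (x1, y1, x2, y2) boxes
        ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
        ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
        inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
        area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
        return inter / (area(a) + area(b) - inter + 1e-9)

    # Visit boxes from highest to lowest confidence
    order = sorted(range(len(boxes)), key=lambda i: confidences[i], reverse=True)
    kept = []
    for i in order:
        if confidences[i] < conf_thresh:
            continue
        if all(iou(boxes[i], boxes[j]) < iou_thresh for j in kept):
            kept.append(i)
    return [boxes[i] for i in kept]
```

Because the detectors run on feature fusion maps of several scales, the surviving box positions would still have to be rescaled back to the coordinates of the image to be identified, as the claim's final step describes.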
8. The safety helmet identification apparatus according to claim 6, wherein the safety helmet identification model comprises a second feature extraction network, a pooling layer, a fully connected layer, and an activation layer;
the safety helmet identification module is specifically used for:
inputting the head region map into the second feature extraction network for image feature extraction to obtain an image feature map;
performing maximum pooling on the image feature map by using the pooling layer to obtain a pooled feature map;
reducing the dimension of the pooled feature map by using the fully connected layer to obtain a one-dimensional vector;
and inputting the one-dimensional vector into the activation layer to obtain a probability that the head is wearing a safety helmet, so as to obtain, according to the probability, the identification result of whether the head in the head region map is wearing a safety helmet.
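Schematically (an illustration only, not the patent's implementation; the layer sizes, the weights, and the 0.5 cutoff are assumed), the classification head of claim 8, global maximum pooling, a fully connected reduction, and a sigmoid-style activation, could look like:

```python
import numpy as np

def helmet_probability(feature_map, w, b):
    """Claim 8 classification head: maximum pooling over the image
    feature map, a fully connected layer reducing the result, and a
    sigmoid activation giving the probability of a helmet being worn."""
    # feature_map: (channels, H, W); pool each channel to its maximum
    pooled = feature_map.max(axis=(1, 2))   # -> (channels,)
    logit = pooled @ w + b                  # fully connected layer
    return 1.0 / (1.0 + np.exp(-logit))     # sigmoid -> probability

# The identification result would then compare the probability
# against a cutoff such as 0.5 (an assumed value).
```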
9. An electronic device, characterized in that the electronic device comprises:
one or more processors;
a memory for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the safety helmet identification method according to any one of claims 1 to 5.
10. A computer-readable storage medium having a computer program stored thereon, wherein the computer program, when executed by a processor, implements the safety helmet identification method according to any one of claims 1 to 5.
CN202010203042.0A 2020-03-20 2020-03-20 Helmet identification method and device, electronic equipment and storage medium Pending CN111401278A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010203042.0A CN111401278A (en) 2020-03-20 2020-03-20 Helmet identification method and device, electronic equipment and storage medium


Publications (1)

Publication Number Publication Date
CN111401278A true CN111401278A (en) 2020-07-10

Family

ID=71432770

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010203042.0A Pending CN111401278A (en) 2020-03-20 2020-03-20 Helmet identification method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN111401278A (en)


Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104504369A (en) * 2014-12-12 2015-04-08 无锡北邮感知技术产业研究院有限公司 Wearing condition detection method for safety helmets
CN109034215A (en) * 2018-07-09 2018-12-18 东北大学 Safety helmet wearing detection method based on deep convolutional neural networks
US20190051292A1 (en) * 2017-08-14 2019-02-14 Samsung Electronics Co., Ltd. Neural network method and apparatus
CN109635697A (en) * 2018-12-04 2019-04-16 国网浙江省电力有限公司电力科学研究院 Electric operating personnel safety dressing detection method based on YOLOv3 target detection
CN109753898A (en) * 2018-12-21 2019-05-14 中国三峡建设管理有限公司 A kind of safety cap recognition methods and device
CN110210274A (en) * 2018-02-28 2019-09-06 杭州海康威视数字技术股份有限公司 Safety cap detection method, device and computer readable storage medium
CN110689054A (en) * 2019-09-10 2020-01-14 华中科技大学 Worker violation monitoring method
CN110852183A (en) * 2019-10-21 2020-02-28 广州大学 Method, system, device and storage medium for identifying person without wearing safety helmet


Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
JOSEPH REDMON et al.: "YOLOv3: An Incremental Improvement", arXiv *
MINGLIANG ZHONG et al.: "A YOLOv3-based non-helmet-use detection for seafarer safety aboard merchant ships", ICAITA 2019 *
小绿叶: "Understanding YOLO v3 in One Article", HTTPS://ZHUANLAN.ZHIHU.COM/P/60944510 *
FANG MING et al.: "Fast detection of safety helmet wearing based on improved YOLO v2", Optics and Precision Engineering *
LIN JUN et al.: "Safety helmet detection method based on YOLO", Computer Systems & Applications *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112149512A (en) * 2020-08-28 2020-12-29 成都飞机工业(集团)有限责任公司 Helmet wearing identification method based on two-stage deep learning
CN112149513A (en) * 2020-08-28 2020-12-29 成都飞机工业(集团)有限责任公司 Industrial manufacturing site safety helmet wearing identification system and method based on deep learning
CN112861751A (en) * 2021-02-22 2021-05-28 中国中元国际工程有限公司 Airport luggage room personnel management method and device
CN112861751B (en) * 2021-02-22 2024-01-12 中国中元国际工程有限公司 Airport luggage room personnel management method and device
CN113139426A (en) * 2021-03-12 2021-07-20 浙江智慧视频安防创新中心有限公司 Detection method and device for wearing safety helmet, storage medium and terminal
CN113435343A (en) * 2021-06-29 2021-09-24 重庆紫光华山智安科技有限公司 Image recognition method and device, computer equipment and storage medium
CN113435343B (en) * 2021-06-29 2022-11-29 重庆紫光华山智安科技有限公司 Image recognition method and device, computer equipment and storage medium
CN114445710A (en) * 2022-01-29 2022-05-06 北京百度网讯科技有限公司 Image recognition method, image recognition device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN111401278A (en) Helmet identification method and device, electronic equipment and storage medium
US11574187B2 (en) Pedestrian attribute identification and positioning method and convolutional neural network system
CN110363182B (en) Deep learning-based lane line detection method
CN107358149B (en) Human body posture detection method and device
CN107358258B (en) SAR image target classification based on NSCT double CNN channels and selective attention mechanism
CN108875540B (en) Image processing method, device and system and storage medium
CN110263712B (en) Coarse and fine pedestrian detection method based on region candidates
KR20160143494A (en) Saliency information acquisition apparatus and saliency information acquisition method
CN110119726B (en) Vehicle brand multi-angle identification method based on YOLOv3 model
CN106845406A (en) Head and shoulder detection method and device based on multitask concatenated convolutional neural network
WO2020220663A1 (en) Target detection method and apparatus, device, and storage medium
EP1774470A1 (en) Object recognition method and apparatus therefor
CN110929593A (en) Real-time significance pedestrian detection method based on detail distinguishing and distinguishing
CN111524145A (en) Intelligent picture clipping method and system, computer equipment and storage medium
CN112115775A (en) Smoking behavior detection method based on computer vision in monitoring scene
CN111860309A (en) Face recognition method and system
CN111967464A (en) Weak supervision target positioning method based on deep learning
Liu et al. Smoke-detection framework for high-definition video using fused spatial-and frequency-domain features
CN111209873A (en) High-precision face key point positioning method and system based on deep learning
CN112464797A (en) Smoking behavior detection method and device, storage medium and electronic equipment
CN113487610B (en) Herpes image recognition method and device, computer equipment and storage medium
CN111382638A (en) Image detection method, device, equipment and storage medium
CN112989958A (en) Helmet wearing identification method based on YOLOv4 and significance detection
CN111881803B (en) Face recognition method based on improved YOLOv3
Ramzan et al. Automatic Unusual Activities Recognition Using Deep Learning in Academia.

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200710