CN113052107A - Method for detecting wearing condition of safety helmet, computer equipment and storage medium - Google Patents


Info

Publication number
CN113052107A
CN113052107A
Authority
CN
China
Prior art keywords
image
target
frame
head
head target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110357038.4A
Other languages
Chinese (zh)
Other versions
CN113052107B (en)
Inventor
王强
王亮
贾亚冲
杨阳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Huaxia Qixin Technology Co ltd
Original Assignee
Beijing Huaxia Qixin Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Huaxia Qixin Technology Co ltd filed Critical Beijing Huaxia Qixin Technology Co ltd
Priority to CN202110357038.4A
Publication of CN113052107A
Application granted
Publication of CN113052107B
Active legal status
Anticipated expiration legal status


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/243 Classification techniques relating to the number of classes
    • G06F 18/24323 Tree-organised classifiers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 Machine learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Medical Informatics (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The application relates to a method, computer equipment and a storage medium for detecting the wearing condition of a safety helmet. The method comprises the following steps: acquiring a time-series image of a preset area; detecting head targets in each frame of image to obtain the head targets contained in each frame, the category and category confidence of each head target, and the image region where each head target is located, wherein the categories comprise: head targets wearing a safety helmet and head targets not wearing a safety helmet; carrying out target tracking to associate the same head target across the multiple frames; for each head target detected in each frame, extracting preset image features from the image region where the head target is located; and for each head target contained in each frame, determining the category of the head target in the current image by using a random forest classifier, according to the categories and category confidences of the head target detected in the current image and in the preceding images of the current image, together with the preset image features extracted from the current image. The detection accuracy is thereby improved.

Description

Method for detecting wearing condition of safety helmet, computer equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, a computer device, and a storage medium for detecting a wearing condition of a safety helmet.
Background
The safety helmet is an important piece of protective equipment on a construction site: it protects the head of an operator, prevents and reduces various injury accidents, and safeguards the operator's life. Ample evidence shows that correctly wearing a safety helmet effectively reduces both the frequency of accidents on a construction site and the risk of injury when accidents do occur.
In the related art, helmet wearing on construction sites is generally not automatically detected. Moreover, site cameras monitor from a long distance and cover a large area; workers move about and are distributed at different distances, so their apparent sizes vary greatly, and detection is further affected by weather and other conditions.
In summary, there is as yet no effective solution for accurately detecting the wearing condition of safety helmets on a construction site.
Disclosure of Invention
To solve the above technical problem or at least partially solve the above technical problem, the present application provides a method, a computer device and a storage medium for detecting a wearing condition of a helmet.
In a first aspect, the present application provides a method for detecting the wearing condition of a safety helmet, comprising: acquiring a time-series image of a preset area; detecting head targets in each frame of image to obtain the head targets contained in each frame, the category and category confidence of each head target, and the image region where each head target is located, wherein the categories comprise: head targets wearing a safety helmet and head targets not wearing a safety helmet; performing multi-target tracking on all detected head targets to associate the same head target across the multiple frames; for each head target detected in each frame, extracting preset image features from the image region where the head target is located; and for each head target contained in each frame, determining the category of the head target in the current image by using a random forest classifier, according to the categories and category confidences of the head target detected in the current image and in the preceding images of the current image, together with the preset image features extracted from the current image.
In certain embodiments, the above method further comprises: for each head target detected in each frame of image, determining the straight-line distance between the image region where the head target is located in the current image and the image region where it is located in the preceding image of the current image; and for each head target contained in each frame, determining the category of the head target in the current image by using a random forest classifier, according to the categories and category confidences of the head target detected in the current image and its preceding images, the preset image features extracted from the current image, and the straight-line distance.
In certain embodiments, the above method further comprises: for a head target newly detected in a frame, the random forest classifier is used from the subsequent images onward to determine the category of that head target in each subsequent image.
In some embodiments, the preset image features include: image local texture features and/or color histogram features.
In some embodiments, the Local Binary Pattern (LBP) operator is used to describe the above-mentioned image Local texture features.
In some embodiments, acquiring a time series image of the preset region includes: acquiring a video acquired by a camera, wherein the visual field of the camera covers a preset area; and extracting multi-frame images from the video according to a preset condition to obtain a time sequence image of a preset area.
In certain embodiments, the above method further comprises: detecting a human body target in each frame of image; and alarming the wearing condition of the safety helmet according to the number of the head targets and the number of the human body targets in each frame of image.
In certain embodiments, the above method further comprises: detecting a human body in each frame of image to obtain a human body target contained in each frame of image; for each frame of image, if the number of head targets determined as not wearing the safety helmet is 0 and the number of head targets determined as wearing the safety helmet is not equal to the number of detected human body targets, marking the image as an abnormal frame; if the number of the head targets determined as not wearing the safety helmet is more than 0, marking the image as an alarm frame; and alarming according to the condition that the continuous multi-frame images are marked as abnormal frames and/or alarm frames.
In a second aspect, the present application provides a computer device comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor; the computer program, when executed by a processor, performs the steps of any of the above-described methods for detecting the wearing condition of a crash helmet.
In a third aspect, the present application provides a computer-readable storage medium storing a program for detecting the wearing condition of a safety helmet, which, when executed by a processor, implements the steps of any one of the above-described methods for detecting the wearing condition of a safety helmet.
Compared with the prior art, the technical solutions provided by the embodiments of the present application have the following advantages. The method performs target detection and target tracking on head targets wearing and not wearing a safety helmet, obtains the categories, category confidences and image regions of the same head target in the current image and in the preceding images of the current image, and extracts preset image features from the region where the head target is located. The category of the head target in the current image is then determined by a random forest classifier from the categories and category confidences of the head target in the current and preceding images together with the preset image features. In this way, head targets wearing and not wearing a safety helmet are accurately identified, even under long monitoring distances and large coverage areas.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
FIG. 1 is a schematic diagram illustrating an embodiment of a system for detecting a wearing condition of a safety helmet according to an embodiment of the present disclosure;
FIG. 2 is a flowchart of an embodiment of a method for detecting a wearing condition of a safety helmet according to an embodiment of the present disclosure;
FIG. 3 is a flowchart of an example of a method for detecting a wearing condition of a safety helmet according to an embodiment of the present disclosure;
FIG. 4 is a diagram illustrating an embodiment of a random forest classifier provided in an embodiment of the present application;
FIG. 5 is an example of LBP characteristics at different brightnesses in an embodiment of the present application;
FIG. 6 is an example of a color histogram feature of differently colored headgear of an embodiment of the present application; and
fig. 7 is a hardware schematic diagram of an implementation manner of a computer device according to an embodiment of the present application.
Detailed Description
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" are used to denote elements merely for convenience of explanation and have no specific meaning in themselves. Thus, "module", "component" and "unit" may be used interchangeably.
The embodiment of the present application provides a system for detecting a wearing condition of a safety helmet, as shown in fig. 1, the system 100 includes: one or more cameras 101, a computer device 102, and a client 103. It should be understood that the embodiments of the present application are not limited thereto, and for example, a camera and a computer device may be integrated.
In the embodiment of the present application, the camera 101 is disposed at the construction site, and the field of view covers at least a partial area of the construction site to acquire a video image of the covered area. In some examples, the camera 101 is fixed to cover an area, and video images of the area are acquired according to preset conditions. In some examples, the job site is divided into a plurality of sub-areas, each sub-area having a camera 101 to capture video images of the sub-area. In some examples, the camera 101 is configured to be adjustable, which can cover different areas at different times to monitor multiple areas.
In the embodiment of the application, the computer device 102 is in communication connection with the camera 101, and the computer device 102 is configured to control the camera 101, receive the video collected by the camera 101, and detect the wearing condition of the safety helmet according to the video images collected by the camera 101. In some examples, the computer device 102 is configured to raise an alarm based on the detection result and send an alarm message to the client 103 to report the helmet-wearing condition on the construction site.
In practice, because the environment of a construction site is often severe and the algorithm is strongly affected by factors such as reflections, similar-looking objects and shooting distance, a large number of missed detections and false detections can occur; in particular, the detection results between different frames are unstable.
In addition, in order to cover a wider range, the camera 101 is generally installed far from the working surface, and a helmet then occupies few pixels, which leads to a high false alarm rate. In some cases, the camera 101 needs to cruise over a wide range, and the sizes of detected targets differ greatly, which likewise leads to a high false alarm rate.
Furthermore, site workers often bend over or squat during operation and are frequently occluded by scaffolding or walls, making them difficult to detect or making the detected head position inaccurate. In addition, when a group of people work close together, a head-region picture cannot be reliably cropped after the human body is detected, so helmet detection fails.
To at least partially solve the above problems, embodiments of the present application provide a method for detecting the wearing condition of a safety helmet. The method combines multi-target tracking with a random forest classifier, is oriented to open construction sites, and addresses missed detections, unstable detection results and the inability to achieve full-coverage detection when checking whether site workers correctly wear safety helmets. It realizes detection of helmet wearing on a construction site and supports online real-time detection.
According to the actual construction environment, the camera 101 is installed appropriately, and preset points and a cruise cycle of the camera 101 can be set so as to obtain on-site images. The construction area to be monitored is observed, and it is judged whether anyone in the area is not wearing a safety helmet. In some examples, to account for missed and false detections of the algorithm, the alarm accuracy is improved by jointly evaluating, over multiple video frames, the detection results for human-body targets, helmet-wearing head targets and non-helmet-wearing head targets before outputting an alarm.
The method for detecting the wearing condition of the safety helmet according to the embodiment of the present application is described below with reference to the system 100 shown in fig. 1.
Fig. 2 is a flowchart of an embodiment of a method for detecting a wearing condition of a safety helmet provided in the present application, and as shown in fig. 2, the method includes steps S202 to S210. It should be understood that, although the steps in the embodiments of the present application have numbers, this is not a limitation on the execution order of the steps, and the steps may be executed in sequence or synchronously as needed.
Step S202, a time-series image of a preset area is acquired.
In step S202, the time-series image may be all frame images of the video image or an image extracted from the video image. In practical applications, images are extracted from video images by setting frame extraction conditions, such as the number of frames extracted per second, and the like.
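As an illustrative sketch (not part of the claimed method), the frame-extraction condition of "N frames extracted per second" can be implemented by choosing which frame indices of the video stream to keep; the function name and parameters here are hypothetical:

```python
def frame_indices(total_frames, video_fps, sample_fps):
    """Indices of the frames to keep when sampling `sample_fps` frames
    per second from a video recorded at `video_fps`."""
    step = video_fps / float(sample_fps)  # spacing between kept frames
    indices = []
    t = 0.0
    while int(round(t)) < total_frames:
        indices.append(int(round(t)))
        t += step
    return indices
```

For a 25 fps video sampled at 5 fps, this keeps every fifth frame; the kept frames then form the time-series image of step S202.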
Step S204, detecting the head target in each frame of image, and obtaining the head target contained in each frame of image, the category of the head target, the category confidence degree and the image area where the head target is located.
In step S204, as an exemplary illustration, the target detection is performed by the YOLO algorithm using a target detection model based on deep learning, but the embodiment of the present application is not limited thereto, and other target detection methods are also possible.
In step S204, the categories of the head target include: a head target with a hard hat and a head target without a hard hat.
In step S204, a head target is detected, and the category of the detected head target is determined. The category confidence is the probability that the head target is identified as a head target wearing a safety helmet, or the probability that it is identified as a head target not wearing a safety helmet.
In step S204, as an exemplary illustration, the image area where the head target is located is a target frame on the image, and the image area where the head target is located is represented by vertex pixel coordinates of the target frame, but this is not limited in this embodiment of the application.
And step S206, performing multi-target tracking on all detected head targets, and associating the same head target in the multi-frame images.
In step S206, a plurality of head targets can be detected on each frame of image, and target tracking is performed on each head target by a multi-target tracking method, so as to associate the same head target in the multi-frame images. As an illustrative illustration, in some examples, multiple target tracking is performed using an SORT algorithm.
In step S206, the same Identification (ID) is assigned to the detected same head target, and head targets of the same ID are the same target.
Step S208, extracting preset image characteristics of an image area where each head target is located for each head target detected in each frame of image.
In step S208, one or more preset image features are extracted, the various image features facilitating the final classification of the head target.
Step S210, for each head target contained in each frame of image, determining the category of the head target in the current image by using a random forest classifier, according to the categories and category confidences of the head target detected in the current image and in the preceding images of the current image, together with the preset image features extracted from the current image.
Via step S210, the category of each head target detected in each frame is determined using a random forest classifier. Because the classifier jointly considers the categories and category confidences from the current image and its preceding images as well as the preset image features extracted from the current image, the accuracy of head-target identification is improved.
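A minimal sketch of how the inputs named in step S210 might be assembled into a single feature vector for the random forest classifier. The layout used here (current-frame class and confidence, the same values from k preceding frames, then the preset image features) is an assumption for illustration, not an encoding mandated by the patent:

```python
def build_feature_vector(cur_cls, cur_conf, prev_results, image_feats, k=3):
    """cur_cls: 0 = helmet worn, 1 = not worn; cur_conf: confidence in [0, 1].
    prev_results: list of (cls, conf) pairs from preceding frames, most
    recent first; missing preceding frames are padded with (-1, 0.0).
    image_feats: flat list of preset image features (e.g. LBP + colour
    histogram). Returns one flat feature vector for the classifier."""
    vec = [float(cur_cls), float(cur_conf)]
    padded = (list(prev_results) + [(-1, 0.0)] * k)[:k]
    for cls, conf in padded:
        vec.extend([float(cls), float(conf)])
    vec.extend(float(f) for f in image_feats)
    return vec
```

The resulting vectors would be fed to a trained classifier (e.g. scikit-learn's RandomForestClassifier) to produce the final per-frame category.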
According to the method provided by the embodiment of the application, target detection and target tracking are carried out on the head targets wearing the safety helmet and the head targets not wearing the safety helmet, the categories, the category confidence degrees and the image areas of the same head target in the current image and the pre-arranged images of the current image are obtained, the preset image characteristics of the area where the head target is located are extracted, the category of the head target corresponding to the current image is determined by using a random forest classifier according to the categories, the category confidence degrees and the preset image characteristics of the head target in the current image, the current image and the pre-arranged images of the head target in the pre-arranged images, and the head targets wearing the safety helmet and the head targets not wearing the safety helmet are accurately identified, particularly the head targets wearing the safety helmet and the head targets not wearing the safety helmet can be accurately identified under the conditions of long monitoring distance and large coverage range.
In some examples, the preset image feature is a colour histogram feature. The colour histogram is a global feature that describes the surface properties of the scene in an image or image region; since safety helmets come in a few fixed colours, the colour histogram feature improves the accuracy of judging whether a head target is wearing one.
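A minimal pure-Python sketch of a per-channel colour histogram over a head region; a real implementation would typically use cv2.calcHist or numpy.histogram, and the bin count here is an illustrative choice:

```python
def color_histogram(region, bins=8):
    """region: list of rows of (r, g, b) pixels with 0-255 channel values.
    Returns a normalised histogram of length 3 * bins, one sub-histogram
    per colour channel, concatenated."""
    hist = [0] * (3 * bins)
    n = 0
    for row in region:
        for pixel in row:
            for channel, value in enumerate(pixel):
                hist[channel * bins + value * bins // 256] += 1
            n += 1
    total = 3 * n  # one count per channel per pixel
    return [h / total for h in hist]
```

Because the helmet colours are few and saturated, their histograms concentrate in a handful of bins, which is what makes this feature discriminative.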
In some examples, the preset image feature is an image local texture feature. Even when the brightness changes markedly, the local texture features of the image change little, so they better capture the appearance of a safety helmet in outdoor scenes. In a preferred example, an LBP operator or one of its improved variants is used to characterise the image local texture features; the LBP feature is an operator that describes local image structure and has the notable advantages of grey-scale invariance and rotation invariance. Herein, "LBP operator" denotes a class of operators, including improved variants, and is not limited to a single one.
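A sketch of the basic 8-neighbour LBP code for a single pixel, illustrating the grey-scale invariance mentioned above (a uniform brightness shift leaves the code unchanged); improved variants such as rotation-invariant or uniform LBP build on the same comparison:

```python
def lbp_code(patch):
    """patch: 3x3 grid of grey values. Returns the 8-bit LBP code of the
    centre pixel: each neighbour (clockwise from top-left) contributes a
    set bit when its value is >= the centre value."""
    centre = patch[1][1]
    neighbours = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
                  patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    code = 0
    for bit, value in enumerate(neighbours):
        if value >= centre:
            code |= 1 << bit
    return code
```

Because only comparisons against the centre pixel are used, adding a constant brightness offset to every pixel leaves the code unchanged, which is exactly the property exploited in outdoor scenes.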
In some examples, the preset image features include color histogram features and image local texture features, so that the accuracy of judging whether the head target wears the safety helmet is improved through the color histogram features, and the adaptability to the brightness of the scene is improved through the image local texture features.
It should be understood that, in the embodiments of the present application, the preset image features are not limited to the colour histogram feature and the image local texture feature; other image features are also conceivable, and a person skilled in the art may adopt one or more of them. By combining, on top of the detection result of the current image, the detection results of the preceding images and the image features of the region where the head target is located, and applying the random forest classifier, the category of the head target in the current image is determined with improved identification accuracy.
In some examples, to improve robustness and reduce target-tracking errors, for each head target detected in each frame of image, the straight-line distance between the image region where the head target is located in the current image and the image region where it is located in the preceding image of the current image is also determined. In step S210, for each head target contained in each frame, a random forest classifier is used to determine the category of the head target in the current image according to the categories and category confidences of the head target detected in the current image and its preceding images, the preset image features extracted from the current image, and the above straight-line distance.
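The straight-line distance between a head target's regions in consecutive frames can be taken, for example, as the Euclidean distance between the centres of the two target frames. Representing a region by its (x1, y1, x2, y2) vertex coordinates follows step S204; using the centre point is an illustrative choice, not one fixed by the patent:

```python
import math

def box_centre(box):
    """box: (x1, y1, x2, y2) pixel coordinates of a target frame."""
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def centre_distance(box_a, box_b):
    """Straight-line distance between the centres of two target frames."""
    (ax, ay), (bx, by) = box_centre(box_a), box_centre(box_b)
    return math.hypot(ax - bx, ay - by)
```

A large inter-frame distance suggests a tracking association error, which is why the classifier can down-weight the preceding frame's evidence in that case.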
In the above step S210, for the newly detected head target in each frame image, the category of the newly detected head target is determined in the subsequent image using the random forest classifier.
In some examples, in step S202, a video captured by a camera is acquired, where a field of view of the camera covers a preset area; and extracting multi-frame images from the video according to a preset condition to obtain a time sequence image of a preset area.
In some examples, the method further comprises: detecting a human body target in each frame of image; and alarming the wearing condition of the safety helmet according to the number of the head targets and the number of the human body targets in each frame of image. The number of the head targets and the number of the human body targets are compared, and the reliability of alarming is improved.
In some examples, the method further comprises: and tracking the head target without wearing the safety helmet to obtain a moving track diagram of the person without wearing the safety helmet. In some examples, when the alarm is given, the movement trace graph of the person without wearing the safety helmet is sent to the client side to inform the related person.
The embodiments of the present application further provide an alarm policy, and the method further comprises: detecting the human body in each frame of image to obtain the human-body targets contained in each frame. For each frame, if the number of head targets determined as not wearing a safety helmet is 0 and the number of head targets determined as wearing a safety helmet is not equal to the number of detected human-body targets, the frame is marked as an abnormal frame; if the number of head targets determined as not wearing a safety helmet is greater than 0, the frame is marked as an alarm frame. An alarm is then raised according to whether consecutive multi-frame images are marked as abnormal frames and/or alarm frames; for example, within a sliding window, an alarm is issued if m consecutive images are marked as alarm frames.
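The marking rule and a simple consecutive-frame alarm condition can be sketched as follows; the run length m is a tunable parameter, not a value fixed by the patent:

```python
def mark_frame(n_no_helmet, n_helmet, n_person):
    """Label one frame from its detection counts, per the policy above."""
    if n_no_helmet > 0:
        return "alarm"
    if n_helmet != n_person:
        return "abnormal"
    return "normal"

def should_alarm(marks, m=3):
    """True when m consecutive frames carry the 'alarm' mark."""
    run = 0
    for mark in marks:
        run = run + 1 if mark == "alarm" else 0
        if run >= m:
            return True
    return False
```

Requiring several consecutive alarm frames suppresses one-off false detections, which is the stated purpose of the multi-frame judgement.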
An example of an embodiment of the present application is described below, taking the YOLO algorithm for target detection and the SORT algorithm for target tracking as an example.
Samples of heads wearing a safety helmet, heads not wearing a safety helmet, and whole human bodies are labelled to train the detection model. From the output of model inference, the category and category confidence are taken, and a random forest classifier is trained on these together with the detection information of the preceding frames and other image features of the localised region, so as to make the final decision on the model's result.
The flowchart of this example is shown in fig. 3, and the present example is described below in conjunction with fig. 3.
Camera configuration and scheduling
According to the actual construction environment, the cameras are reasonably installed and deployed, and the preset points and the cruise cycle of the cameras are set, so that the on-site images are obtained. And for the key area, a fixed camera is adopted for real-time image acquisition.
Preset points are configured so that full coverage of the construction site can be achieved with a plurality of cameras; a camera cruise cycle and preset-point setting rules are defined; the cameras' preset points are scheduled automatically, each camera being steered to a specified preset point at regular times according to the defined logic. Combined with the intelligent detection algorithm, frames are extracted from the video, with the number of frames per second determined by the algorithm's requirements, for detection processing.
Model training and target detection
Target detection is performed using a deep-learning-based target detection algorithm. The YOLO family offers a relatively fast detection speed and relatively high accuracy, and this example is described using the YOLOv3 algorithm.
Sample labelling and model training are carried out for the heads of site personnel, personnel wearing safety helmets and personnel not wearing safety helmets. Suitable video frames are extracted from the site's surveillance video as sample data, with 80% of all pictures used as the training set and the remaining 20% as the test set. The trained target detection model then detects site personnel in real time: each video frame is input during detection, and the target positions (human-body position and head position), the category (whether a safety helmet is worn) and the category confidence detected in the current frame are output.
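The 80/20 split of the labelled sample pictures can be sketched as a seeded random shuffle; the seed and function name are illustrative assumptions:

```python
import random

def split_dataset(samples, train_frac=0.8, seed=0):
    """Shuffle the labelled pictures reproducibly and split them into a
    training set (train_frac of the data) and a test set (the rest)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]
```

Shuffling before splitting avoids the train and test sets coming from disjoint time spans of the surveillance video, which would bias the evaluation.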
Multi-target tracking
And determining the target in the video image by adopting a multi-target tracking algorithm, continuously tracking the movement of the target, and drawing the movement track of the detected target in real time in a detection window.
The effect of a multi-target tracking algorithm is closely tied to the quality of target detection, because the mainstream multi-target tracking algorithms follow a TBD (Tracking-by-Detection) strategy. This example is explained using the SORT (Simple Online and Realtime Tracking) algorithm, although DeepSORT or another MOT (Multiple Object Tracking) algorithm may of course be used instead.
The SORT algorithm combines a Kalman filter with the Hungarian algorithm. The precondition for tracking a target with SORT is that a detector has located the target; if the target is not detected accurately, the tracking effect is poor. The specific steps are as follows:
1) Detect the first frame of the video with YOLO, establish and initialize a tracker with the detected target information (Box, the target frame), and assign an ID to each target; the Box information detected in the first frame is processed by the Kalman filter to generate the state prediction and covariance prediction for the second frame.
2) Detect the second frame of the video with YOLO, compute the IOU (Intersection-over-Union) between the newly obtained target information and the target information predicted by the Kalman filter from the previous frame, obtain the maximum-IOU unique matching between the two frames with the Hungarian bipartite-graph matching algorithm (the data-association step), and remove matched pairs whose matching value is smaller than a threshold, so that the same target in consecutive video frames is matched.
3) Update the Kalman tracker with the matched detection Boxes of the second frame, computing the Kalman gain, state update, and covariance update, and output the state update value as the tracking Box of the second frame. For targets without a match in the second frame, the tracker is reinitialized and a new ID is assigned.
4) Repeat steps 2) and 3) until the video ends.
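The IOU computation and data-association step above can be sketched as follows. Note this is a simplified illustration: greedy best-first matching stands in for the Hungarian algorithm, and the Kalman prediction is assumed to have already produced the `predicted` boxes.

```python
def iou(a, b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def associate(predicted, detected, threshold=0.3):
    """Match predicted tracker boxes to new detections by maximum IOU.
    Greedy matching is used here in place of the Hungarian algorithm;
    pairs below `threshold` are discarded, as in step 2) above."""
    pairs = sorted(((iou(p, d), i, j)
                    for i, p in enumerate(predicted)
                    for j, d in enumerate(detected)), reverse=True)
    matches, used_p, used_d = [], set(), set()
    for score, i, j in pairs:
        if score >= threshold and i not in used_p and j not in used_d:
            matches.append((i, j))
            used_p.add(i)
            used_d.add(j)
    return matches

# a tracker box that barely moved still matches its detection
print(associate([(0, 0, 10, 10)], [(1, 1, 11, 11)]))  # [(0, 0)]
```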
Target detection and processing
For the same frame of image, detection is performed with the human-body model, the helmet-worn model, and the helmet-not-worn model respectively; a YOLOv3- or R-CNN-style algorithm yields the corresponding target detection results, namely the target position (image region), category, and category confidence. The following target sets are established, with the current frame denoted F(x), the previous frame F(x-1), and the next frame F(x+1):
S_P(x): the set of human-body targets, each represented as T(x, y), where y is the tracking ID of the target or 0; human-body targets are not themselves tracked and serve only as a basis for the alarm-decision output.
S_H(x): the set of head targets wearing a safety helmet, each represented as H1(x, z), where z is the tracking ID of the target.
S_N(x): the set of head targets not wearing a safety helmet, each represented as H2(x, z), where z is the tracking ID of the target.
For the sets S_H(x) and S_N(x), a random forest classifier makes a voting decision to determine the final category of each target.
Random forest classifier design
A random forest is a classifier composed of multiple decision trees; its classification decision is determined by a majority vote over the classification results of the constituent trees, making it an ensemble learning method based on decision trees, as shown in fig. 4. Compared with other classification algorithms, a random forest tolerates noise better and generalizes better. The steps for constructing the random forest are as follows:
1) Set the number T of decision trees to be constructed for the random forest;
2) Perform bootstrap resampling on the sample data to generate multiple sample subsets: each draw takes one of the N samples at random with replacement, so that after N draws a subset of N samples is obtained, possibly containing repeated samples;
3) Randomly extract the features used to construct each decision tree: m features are selected at random from all candidate features each time, to serve as the candidate features for the decision at the current node;
4) Construct a decision tree from each resampled sample set, used as training samples, with the selected candidate features;
5) After the preset number of decision trees has been obtained, the output of each tree is treated as a vote, and the decision with the most votes is output as the final decision of the random forest.
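Steps 2) and 5) above can be sketched as follows (an illustrative fragment; the tree-construction steps themselves are omitted, and in practice a library such as scikit-learn's RandomForestClassifier handles all five steps):

```python
import random
from collections import Counter

def bootstrap_sample(samples, rng):
    """Step 2: draw N times with replacement from N samples, so some
    samples may appear repeatedly and others not at all."""
    return [rng.choice(samples) for _ in samples]

def forest_vote(tree_outputs):
    """Step 5: the class receiving the most votes among the decision
    trees is the random forest's final decision."""
    return Counter(tree_outputs).most_common(1)[0][0]

rng = random.Random(0)
print(len(bootstrap_sample(list(range(10)), rng)))  # 10
print(forest_vote(["helmet", "no_helmet", "helmet"]))  # helmet
```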
For the same target, the designed random forest classifier is trained on the following classification basis: the category and category confidence of the target in the current frame; the category and category confidence of the same target in the previous frame (a target with a new ID has not yet passed through the random forest classifier, and voting on it is deferred to the next frame); the straight-line distance between the target in the current frame and the same target in the previous frame; and the Circular LBP feature and color histogram feature of the image region where the target is located.
Circular LBP is an improvement on the ordinary LBP feature. LBP features are operators describing the local texture of an image and have the notable advantages of gray-scale invariance and rotation invariance. As shown in fig. 5, even when the brightness changes markedly, the LBP feature of the image changes little, so it represents the feature information of a helmet in outdoor scenes well. The color histogram is a global feature describing the surface properties of the scene corresponding to an image or image region; since helmet targets come in a few fixed colors, fig. 6 shows the color histograms of helmets of different colors.
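The two features can be sketched as follows. Note the assumptions: this shows the basic 8-neighbour LBP on a 3x3 patch rather than the interpolated circular, rotation-invariant variant used in the embodiment, and the histogram is a coarse per-channel version of the color histogram feature.

```python
def lbp_code(patch):
    """Basic 3x3 LBP code for the centre pixel: each neighbour brighter
    than or equal to the centre contributes one bit (clockwise from the
    top-left). Because only comparisons with the centre are used, the
    code is unchanged by uniform brightness shifts (gray-scale invariance)."""
    c = patch[1][1]
    ring = [patch[0][0], patch[0][1], patch[0][2], patch[1][2],
            patch[2][2], patch[2][1], patch[2][0], patch[1][0]]
    return sum((1 << k) for k, p in enumerate(ring) if p >= c)

def color_histogram(pixels, bins=4):
    """Coarse per-channel histogram of (r, g, b) pixels with values 0-255;
    helmets of different colors produce clearly separated histograms."""
    hist = [[0] * bins for _ in range(3)]
    for px in pixels:
        for ch, v in enumerate(px):
            hist[ch][min(v * bins // 256, bins - 1)] += 1
    return hist

# a uniformly brighter ring around a dark centre sets all 8 bits
print(lbp_code([[9, 9, 9], [9, 5, 9], [9, 9, 9]]))  # 255
```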
In this example, the random forest classifier is trained as follows:
1) For the image to be detected, the trained YOLOv3 model first performs inference on it to obtain the position, category, and confidence information of each target.
2) For each target detected by YOLOv3 in the current frame, the LBP feature and color histogram feature of the target are extracted, and the target is associated with its counterpart in the previous frame by the SORT algorithm.
3) For each target located by YOLOv3 in the current frame, the following are selected as feature attributes for training the random forest classifier: its category and category confidence, the category and category confidence of the same target in the previous frame, the straight-line distance to the same target in the previous frame, and the LBP (Local Binary Pattern) feature and color histogram feature of the target region.
The trained random forest classifier then classifies the detection result of each frame, and the final detection result is determined from the final classification result of the decision trees.
Processing of decision outputs
Persons detected in the surveillance video without a safety helmet are alarmed on and tracked to form a movement-track graph. To further reduce the impact on the system of false detections, missed detections, and frequent alarms, the following alarm logic is designed:
Let the current frame be F(x), and count in frame F(x) the number of detected human-body targets P(x), the number of head targets wearing a safety helmet H(x), and the number of head targets not wearing a safety helmet N(x):
if N(x) = 0 and H(x) = P(x), everyone is wearing a safety helmet and no alarm is needed;
if N(x) = 0 and H(x) > P(x), there is probably a missed detection of a human body, or a body may be occluded; frame F(x) is marked as an abnormal frame;
if N(x) = 0 and H(x) < P(x), there may be serious occlusion between persons in the current frame; frame F(x) is marked as an abnormal frame;
if N(x) > 0, there is probably a person not wearing a safety helmet; the set S(x) = {targets T(x, y) not wearing a safety helmet in the current frame F(x)} is established, and frame F(x) is marked as an alarm frame.
On this basis, a sliding window W(m) is established to count the alarm results of m consecutive frames (where m can be set as appropriate). If targets without helmets appear continuously throughout the m frames, an alarm is raised for the targets in the set; the alarm includes generating the video preceding the alarm trigger and the movement track of the alarmed target. For continuously occurring abnormal frames, alarms can be configured according to user requirements, so that operators on duty can attend to possible anomalies at any time.
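The sliding-window alarm logic W(m) above can be sketched as follows (a minimal illustration; the class name is hypothetical, and per-target tracking within the set S(x) is omitted):

```python
from collections import deque

class AlarmWindow:
    """Sliding window W(m) over per-frame results: an alarm is raised
    only when a no-helmet target appears in m consecutive frames,
    damping one-frame false detections."""
    def __init__(self, m=5):
        self.frames = deque(maxlen=m)

    def push(self, no_helmet_count: int) -> bool:
        """Record one frame's N(x) and return True when all of the
        last m frames contained a no-helmet target."""
        self.frames.append(no_helmet_count > 0)
        return len(self.frames) == self.frames.maxlen and all(self.frames)

w = AlarmWindow(m=3)
# a single clean frame resets the run; only 3 consecutive hits alarm
print([w.push(n) for n in [1, 1, 0, 1, 1, 1]])
# [False, False, False, False, False, True]
```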
In this example, an alarm is raised for workers not wearing a safety helmet; detection can run continuously according to user settings, generating multiple groups of alarm information; and persons not wearing a safety helmet are track-traced, producing alarm information and video.
The embodiment also provides computer equipment. The computer device 20 of the present embodiment includes at least, but is not limited to: a memory 21, a processor 22, which may be communicatively coupled to each other via a system bus, as shown in FIG. 7. It is noted that fig. 7 only shows a computer device 20 with components 21-22, but it is to be understood that not all shown components are required to be implemented, and that more or fewer components may be implemented instead.
In the present embodiment, the memory 21 (i.e., a readable storage medium) includes a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like. In some embodiments, the storage 21 may be an internal storage unit of the computer device 20, such as a hard disk or a memory of the computer device 20. In other embodiments, the memory 21 may also be an external storage device of the computer device 20, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), or the like, provided on the computer device 20. Of course, the memory 21 may also include both internal and external storage devices of the computer device 20. In this embodiment, the memory 21 is generally used for storing an operating system installed in the computer device 20 and various application software, such as program codes of a method for detecting the wearing condition of a helmet. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
Processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 20. In this embodiment, the processor 22 is configured to execute the program code stored in the memory 21 or process data, such as the program code of the method for detecting the wearing condition of the helmet, so as to implement the method for detecting the wearing condition of the helmet.
The present embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application mall, etc., on which a computer program is stored, which when executed by a processor implements corresponding functions. The computer-readable storage medium of the embodiment is used for storing a program for detecting the wearing condition of the safety helmet, and when the program is executed by a processor, the steps of the method for detecting the wearing condition of the safety helmet are realized.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method of detecting the wear of a safety helmet, comprising:
acquiring a time sequence image of a preset area;
detecting a head target in each frame of image, and obtaining the head target contained in each frame of image, the category of the head target, the category confidence and the image area where the head target is located, wherein the categories comprise: a head target wearing a safety helmet and a head target not wearing a safety helmet;
performing multi-target tracking on all detected head targets, and associating the same head target in multiple frames of images;
for each head target detected in each frame of image, extracting preset image characteristics of an image area where each head target is located;
and for each head target contained in each frame of image, determining the category of each head target in the current image by using a random forest classifier according to the category and category confidence of each head target detected in the current image and in the preceding images of the current image, and the preset image features extracted from the current image.
2. The method of claim 1, further comprising:
for each head target detected in each frame of image, determining a straight-line distance between the image area of each head target in the current image and the image area of each head target in a preceding image of the current image;
and for each head target contained in each frame of image, determining the category of each head target in the current image by using a random forest classifier according to the category and category confidence of each head target detected in the current image and in a preceding image of the current image, the preset image features extracted from the current image, and the straight-line distance.
3. The method of claim 1 or 2, further comprising: for a head target newly detected in a frame of image, determining the category of the newly detected head target in subsequent images by using the random forest classifier.
4. The method according to claim 1 or 2, wherein the preset image features comprise: image local texture features and/or color histogram features.
5. The method of claim 4, wherein the image local texture features are described using an LBP operator.
6. The method of claim 1, wherein obtaining a time series of images of a predetermined area comprises:
acquiring a video acquired by a camera, wherein the visual field of the camera covers a preset area;
and extracting multi-frame images from the video according to a preset condition to obtain a time sequence image of the preset area.
7. The method of any one of claims 1 to 6, further comprising: detecting a human body target in each frame of image; and alarming the wearing condition of the safety helmet according to the number of the head targets and the number of the human body targets in each frame of image.
8. The method of any one of claims 1 to 6, further comprising:
detecting a human body in each frame of image to obtain a human body target contained in each frame of image;
for each frame of image, if the number of head targets determined as not wearing the safety helmet is 0 and the number of head targets determined as wearing the safety helmet is not equal to the number of detected human body targets, marking the image as an abnormal frame; if the number of the head targets determined as not wearing the safety helmet is more than 0, marking the image as an alarm frame;
and alarming according to the condition that the continuous multi-frame images are marked as abnormal frames and/or alarm frames.
9. A computer device, characterized in that the computer device comprises:
a memory, a processor, and a computer program stored on the memory and executable on the processor;
the computer program, when being executed by the processor, realizes the steps of the method of detecting a headgear wear condition according to any one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a program for detecting a wearing condition of a helmet, which program, when executed by a processor, implements the steps of the method for detecting a wearing condition of a helmet as claimed in any one of claims 1 to 8.
CN202110357038.4A 2021-04-01 2021-04-01 Method for detecting wearing condition of safety helmet, computer equipment and storage medium Active CN113052107B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110357038.4A CN113052107B (en) 2021-04-01 2021-04-01 Method for detecting wearing condition of safety helmet, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110357038.4A CN113052107B (en) 2021-04-01 2021-04-01 Method for detecting wearing condition of safety helmet, computer equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113052107A true CN113052107A (en) 2021-06-29
CN113052107B CN113052107B (en) 2023-10-24

Family

ID=76517534

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110357038.4A Active CN113052107B (en) 2021-04-01 2021-04-01 Method for detecting wearing condition of safety helmet, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113052107B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113553963A (en) * 2021-07-27 2021-10-26 广联达科技股份有限公司 Detection method and device of safety helmet, electronic equipment and readable storage medium
CN113554682A (en) * 2021-08-03 2021-10-26 同济大学 Safety helmet detection method based on target tracking
CN113658219A (en) * 2021-07-22 2021-11-16 浙江大华技术股份有限公司 High-altitude parabolic detection method, device and system, electronic device and storage medium
CN113743214A (en) * 2021-08-02 2021-12-03 国网安徽省电力有限公司检修分公司 Intelligent pan-tilt camera
CN116958707A (en) * 2023-08-18 2023-10-27 武汉市万睿数字运营有限公司 Image classification method, device and related medium based on spherical machine monitoring equipment

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108288033A (en) * 2018-01-05 2018-07-17 电子科技大学 A kind of safety cap detection method merging multiple features based on random fern
US20190108392A1 (en) * 2017-10-10 2019-04-11 Caterpillar Inc. Method and system for tracking workers at worksites
CN110602449A (en) * 2019-09-01 2019-12-20 天津大学 Intelligent construction safety monitoring system method in large scene based on vision
CN110852283A (en) * 2019-11-14 2020-02-28 南京工程学院 Helmet wearing detection and tracking method based on improved YOLOv3
CN111191581A (en) * 2019-12-27 2020-05-22 深圳供电局有限公司 Safety helmet detection method and device based on electric power construction and computer equipment
CN111723749A (en) * 2020-06-23 2020-09-29 广东电网有限责任公司 Method, system and equipment for identifying wearing of safety helmet
CN111815577A (en) * 2020-06-23 2020-10-23 深圳供电局有限公司 Method, device, equipment and storage medium for processing safety helmet wearing detection model
CN111913799A (en) * 2020-07-14 2020-11-10 北京华夏启信科技有限公司 Video stream online analysis task scheduling method and computer equipment
CN111914636A (en) * 2019-11-25 2020-11-10 南京桂瑞得信息科技有限公司 Method and device for detecting whether pedestrian wears safety helmet

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190108392A1 (en) * 2017-10-10 2019-04-11 Caterpillar Inc. Method and system for tracking workers at worksites
CN108288033A (en) * 2018-01-05 2018-07-17 电子科技大学 A kind of safety cap detection method merging multiple features based on random fern
CN110602449A (en) * 2019-09-01 2019-12-20 天津大学 Intelligent construction safety monitoring system method in large scene based on vision
CN110852283A (en) * 2019-11-14 2020-02-28 南京工程学院 Helmet wearing detection and tracking method based on improved YOLOv3
CN111914636A (en) * 2019-11-25 2020-11-10 南京桂瑞得信息科技有限公司 Method and device for detecting whether pedestrian wears safety helmet
CN111191581A (en) * 2019-12-27 2020-05-22 深圳供电局有限公司 Safety helmet detection method and device based on electric power construction and computer equipment
CN111723749A (en) * 2020-06-23 2020-09-29 广东电网有限责任公司 Method, system and equipment for identifying wearing of safety helmet
CN111815577A (en) * 2020-06-23 2020-10-23 深圳供电局有限公司 Method, device, equipment and storage medium for processing safety helmet wearing detection model
CN111913799A (en) * 2020-07-14 2020-11-10 北京华夏启信科技有限公司 Video stream online analysis task scheduling method and computer equipment

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ARYA K M 等: "A Review on Deep Learning Based Helmet Detection", 《ICSEE》 *
秦嘉 等: "基于深度学习的安全帽佩戴检测与跟踪", 计算机与现代化, no. 06 *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113658219A (en) * 2021-07-22 2021-11-16 浙江大华技术股份有限公司 High-altitude parabolic detection method, device and system, electronic device and storage medium
CN113553963A (en) * 2021-07-27 2021-10-26 广联达科技股份有限公司 Detection method and device of safety helmet, electronic equipment and readable storage medium
CN113743214A (en) * 2021-08-02 2021-12-03 国网安徽省电力有限公司检修分公司 Intelligent pan-tilt camera
CN113743214B (en) * 2021-08-02 2023-12-12 国网安徽省电力有限公司超高压分公司 Intelligent cradle head camera
CN113554682A (en) * 2021-08-03 2021-10-26 同济大学 Safety helmet detection method based on target tracking
CN116958707A (en) * 2023-08-18 2023-10-27 武汉市万睿数字运营有限公司 Image classification method, device and related medium based on spherical machine monitoring equipment
CN116958707B (en) * 2023-08-18 2024-04-23 武汉市万睿数字运营有限公司 Image classification method, device and related medium based on spherical machine monitoring equipment

Also Published As

Publication number Publication date
CN113052107B (en) 2023-10-24

Similar Documents

Publication Publication Date Title
CN113052107B (en) Method for detecting wearing condition of safety helmet, computer equipment and storage medium
CN108053427B (en) Improved multi-target tracking method, system and device based on KCF and Kalman
CN108062349B (en) Video monitoring method and system based on video structured data and deep learning
CN108009473B (en) Video structuralization processing method, system and storage device based on target behavior attribute
CN108052859B (en) Abnormal behavior detection method, system and device based on clustering optical flow characteristics
JP7238217B2 (en) A system for identifying defined objects
US8744125B2 (en) Clustering-based object classification
CN108229297B (en) Face recognition method and device, electronic equipment and computer storage medium
US20140003710A1 (en) Unsupervised learning of feature anomalies for a video surveillance system
CN108009466B (en) Pedestrian detection method and device
CN112396658A (en) Indoor personnel positioning method and positioning system based on video
Bell et al. A novel system for nighttime vehicle detection based on foveal classifiers with real-time performance
CN112733814B (en) Deep learning-based pedestrian loitering retention detection method, system and medium
CN111010547A (en) Target object tracking method and device, storage medium and electronic device
CN109255360B (en) Target classification method, device and system
CN114782897A (en) Dangerous behavior detection method and system based on machine vision and deep learning
CN115690892B (en) Mitigation method and device, electronic equipment and storage medium
CN111797726A (en) Flame detection method and device, electronic equipment and storage medium
CN111191507A (en) Safety early warning analysis method and system for smart community
CN111263955A (en) Method and device for determining movement track of target object
KR20180085505A (en) System for learning based real time guidance through face recognition and the method thereof
CN116778673A (en) Water area safety monitoring method, system, terminal and storage medium
SE519700C2 (en) Image Data Processing
CN113947795A (en) Mask wearing detection method, device, equipment and storage medium
CN112347989A (en) Reflective garment identification method and device, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant