CN114155601A - Vision-based method and system for detecting dangerous behaviors of operating personnel - Google Patents
- Publication number
- CN114155601A (application number CN202111459039.6A)
- Authority
- CN
- China
- Prior art keywords
- human body
- operator
- dangerous
- dangerous behavior
- posture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G08—SIGNALLING
- G08B—SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
- G08B21/00—Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
- G08B21/02—Alarms for ensuring the safety of persons
Abstract
The invention relates to a vision-based method and system for detecting dangerous behaviors of operators, belonging to the field of behavior detection and recognition. First, human body target detection is performed on the operators in a monitoring picture to obtain their position information. Human body posture key points are then located in the monitoring picture after target detection; this key point localization determines the current behavior posture of each human body by locating a plurality of human body key points. Next, dangerous behavior analysis is performed on each operator in the monitoring picture according to the position information and the key point localization information to obtain a dangerous behavior analysis result. Finally, whether to activate an alarm is decided according to the analysis result. The method can effectively detect whether each operator exhibits dangerous behavior and raise a timely alarm when one occurs.
Description
Technical Field
The invention relates to the field of behavior detection and recognition, and in particular to a vision-based method and system for detecting dangerous behaviors of operators.
Background
In recent years, the safety of operators on production lines has drawn growing public attention. In current production workshops, abnormal operator behavior is mostly monitored manually, and in workshops with much equipment and complex production processes, blind spots in video coverage are covered by manual patrols. This is inefficient and cannot analyze or warn of dangerous operator behaviors. On a production line, when an operator suffers an accident such as a fall, or even loses consciousness, as a result of behaviors such as dozing off, looking at a mobile phone, or chatting, treatment may be delayed because no one notices in time, endangering the operator's life. Monitoring staff are thus burdened with heavy monitoring and inspection tasks, while the safety of operators on the line still cannot be guaranteed. It is therefore very important to detect dangerous operator behaviors automatically and effectively and to give timely early warning.
Existing intelligent video monitoring methods generally use deep-learning-based target detection. Such detection, however, only locates the operator as a whole; it cannot recognize and analyze the operator's posture and behavior, cannot identify dangerous behaviors, and cannot warn of them in time. A method and system are therefore needed that automatically detect dangerous operator behaviors and issue timely warnings, solving the prior-art problem that dangerous behaviors can be neither effectively detected nor warned of in time.
Disclosure of Invention
The invention aims to provide a vision-based method and system for detecting dangerous behaviors of operators that can effectively detect such behaviors and warn of them in time, solving the prior-art problem that dangerous operator behaviors can be neither effectively detected nor warned of in time.
In order to achieve the purpose, the invention provides the following scheme:
In one aspect, the invention provides a vision-based method for detecting dangerous behaviors of operators, comprising the following steps:
performing human body target detection on the operators in a monitoring picture to obtain position information of the operators;
locating human body posture key points in the monitoring picture after human body target detection, the key point localization determining the current behavior posture of each human body by locating a plurality of human body key points;
performing dangerous behavior analysis on each operator in the monitoring picture according to the position information and the key point localization information to obtain a dangerous behavior analysis result;
and deciding whether to activate an alarm according to the dangerous behavior analysis result.
Optionally, the human body posture key points include a left eye, a right eye, a nose, a left ear, a right ear, a neck, a left shoulder, a right shoulder, a left elbow, a right elbow, a left wrist, a right wrist, a left waist, a right waist, a left knee, a right knee, a left foot and a right foot.
Optionally, performing dangerous behavior analysis on each operator in the monitoring picture according to the position information and the key point localization information to obtain a dangerous behavior analysis result specifically includes:
setting up a dangerous behavior sample library comprising a plurality of dangerous behavior samples, the samples covering dangerous behavior categories such as falling, playing with a mobile phone, leaning in violation of rules, climbing, and personnel gathering;
identifying the current action posture and surrounding environment of each operator according to the position information and the key point localization information, the surrounding environment information being used to judge whether an object to lean on or climb exists around the operator;
and comparing the operator's current action posture and surrounding environment information with the dangerous behavior samples in the library to determine whether the current action posture matches any dangerous behavior category, thereby obtaining the dangerous behavior analysis result.
Optionally, performing human body target detection on the operators in the monitoring picture to obtain their position information specifically includes:
performing human body target detection on the operators in the monitoring picture with a YOLOv5 deep network model to obtain the position information of each operator.
Optionally, the YOLOv5 deep network model includes a Backbone layer, a Neck layer, and a Head layer.
The Backbone layer is a backbone network used to aggregate and form image features at different image granularities; it comprises a Focus structure and a CSPNet structure, the latter integrating gradient changes into the image feature map.
The Neck layer mixes and combines image features and passes them to the Head layer; it comprises an FPN structure and a PAN structure.
The Head layer performs prediction on the image features, generating bounding boxes and predicted categories to obtain a prediction box for each operator; the prediction box is the operator's target detection result.
Optionally, locating the human body posture key points in the monitoring picture after human body target detection specifically includes:
locating the human body posture key points with OpenPose;
OpenPose comprises a first branch structure and a second branch structure;
the first branch predicts the confidence of each extracted human posture key point;
the second branch encodes and analyzes the degree of association between the operator's joints to obtain affinity vectors; the key point confidences and affinity vectors are then jointly inferred to locate the human posture key points.
Optionally, before the step of "detecting the human body target of the operator in the monitoring picture to obtain the position information of each operator", the method further includes:
acquiring real-time video monitoring data of a production area to obtain a video stream;
and intercepting monitoring pictures from the video stream at a preset frequency, and preprocessing the monitoring pictures.
Optionally, the preprocessing the monitoring picture specifically includes:
cropping the monitoring picture to meet the model's input-size requirement, obtaining a cropped monitoring picture;
and performing noise reduction and filtering on the cropped monitoring picture to obtain a preprocessed monitoring picture.
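A minimal NumPy sketch of these two preprocessing steps, with a center crop standing in for the model-specific cropping and a simple box filter standing in for whatever noise-reduction filter is actually used (both choices are assumptions for illustration):

```python
import numpy as np

def center_crop(img: np.ndarray, size: tuple) -> np.ndarray:
    """Crop a grayscale image to (th, tw) around its center."""
    h, w = img.shape[:2]
    th, tw = size
    top, left = (h - th) // 2, (w - tw) // 2
    return img[top:top + th, left:left + tw]

def box_blur(img: np.ndarray, k: int = 3) -> np.ndarray:
    """k x k box filter with edge padding, a crude noise-reduction stand-in."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

frame = np.ones((8, 10)) * 7.0       # toy constant "frame"
cropped = center_crop(frame, (4, 6))
denoised = box_blur(cropped)
print(denoised.shape)  # (4, 6)
```

In a real deployment a proper denoising filter (median, Gaussian, etc.) would replace the box blur, but the crop-then-filter pipeline is the same.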
Optionally, the determining whether to activate an alarm according to the dangerous behavior analysis result specifically includes:
when a dangerous behavior occurs, immediately activating the alarm and sending the dangerous behavior category and the operator's position information to the processor so that the dangerous behavior can be handled in time.
In another aspect, the invention also provides a vision-based system for detecting dangerous behaviors of operators, comprising:
a human body target detection module for performing human body target detection on the operators in the monitoring picture to obtain their position information;
a human body posture key point localization module for locating human body posture key points in the monitoring picture after target detection, the localization determining the current behavior posture of each human body by locating a plurality of human body key points;
a dangerous behavior analysis module for performing dangerous behavior analysis on each operator in the monitoring picture according to the position information and the key point localization information to obtain a dangerous behavior analysis result;
and a dangerous behavior alarm module for deciding whether to activate the alarm according to the dangerous behavior analysis result.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the invention provides a vision-based method for detecting dangerous behaviors of operators, which is used for detecting human body targets and positioning human body posture key points of the operators in a monitoring picture so as to judge whether the operators exist in the monitoring picture, determining specific position information and posture key point positioning information of the operators in an image, and analyzing the dangerous behaviors according to the position information and the posture key point positioning information of the operators, so as to identify whether the current action posture of the operators is the dangerous behaviors, and thus, the dangerous behaviors of the operators are accurately and efficiently detected and identified. Moreover, when dangerous behaviors occur to the operating personnel, the alarm can be timely made aiming at the dangerous behaviors, so that malignant accidents can be effectively prevented, the problems that actions of the dangerous behaviors cannot be automatically and accurately detected and the alarm cannot be made to the dangerous behaviors in the prior art can be solved, the personal safety of the operating personnel can be effectively guaranteed, the monitoring personnel in a factory can be liberated from heavy monitoring and patrolling work, and the efficiency of safety management of the production factory is improved.
Drawings
To illustrate the embodiments of the present invention or the prior-art technical solutions more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the invention, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a flowchart of a method for detecting dangerous behaviors of a worker based on vision according to embodiment 1 of the present invention;
fig. 2 is a diagram of the OpenPose network structure according to embodiment 1 of the present invention;
fig. 3 is a schematic distribution diagram of key points of human body posture according to embodiment 1 of the present invention;
fig. 4 is a block diagram of a system for detecting dangerous behavior of a worker based on vision according to embodiment 2 of the present invention.
Description of reference numerals:
0-nose; 1-neck; 2-left shoulder; 3-left elbow; 4-left wrist; 5-right shoulder; 6-right elbow; 7-right wrist; 8-left waist; 9-left knee; 10-left foot; 11-right waist; 12-right knee; 13-right foot; 14-left eye; 15-right eye; 16-left ear; 17-right ear.
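The reference numerals above double as key point indices. When decoding pose output, a lookup table such as the following (a hypothetical helper mirroring the numbering of fig. 3) keeps the mapping explicit:

```python
# Index-to-name table for the 18 posture key points, following fig. 3.
KEYPOINTS = {
    0: "nose", 1: "neck", 2: "left_shoulder", 3: "left_elbow", 4: "left_wrist",
    5: "right_shoulder", 6: "right_elbow", 7: "right_wrist", 8: "left_waist",
    9: "left_knee", 10: "left_foot", 11: "right_waist", 12: "right_knee",
    13: "right_foot", 14: "left_eye", 15: "right_eye", 16: "left_ear",
    17: "right_ear",
}

print(len(KEYPOINTS))  # 18
```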
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments derived by those skilled in the art from these embodiments without creative effort shall fall within the protection scope of the invention.
The invention aims to provide a vision-based method and system for detecting dangerous behaviors of operators that can effectively detect such behaviors and warn of them in time, solving the prior-art problem that dangerous operator behaviors can be neither effectively detected nor warned of in time.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Example 1
As shown in fig. 1, the present embodiment provides a method for detecting dangerous behaviors of an operator based on vision, which specifically includes the following steps:
and step S1, acquiring real-time video monitoring data of the production area to obtain video streams.
The embodiment adopts the binocular cameras installed at all corners of a production plant area to acquire the video monitoring data of the production area in real time to obtain video streams.
And step S2, intercepting monitoring pictures from the video stream at a preset frequency, and preprocessing the monitoring pictures. The method specifically comprises the following steps:
and S2.1, cutting the monitoring picture to meet the requirement of the model on the size of the input picture to obtain the cut monitoring picture.
In this embodiment, the YOLOV5 deep network model is used to perform human target detection on the operator in the monitoring picture, so that when the monitoring picture is cut, the monitoring picture is cut according to the size requirement of the input picture of the YOLOV5 deep network model, so that the cut monitoring picture is easily input into the YOLOV5 deep network model, thereby increasing the speed of the YOLOV5 deep network model for performing human target detection, and further increasing the overall speed and efficiency of detecting and identifying dangerous behaviors.
Step S2.2: perform noise reduction and filtering on the cropped picture to obtain the preprocessed monitoring picture.
In this embodiment, noise reduction and filtering effectively remove clutter and other interference from the cropped monitoring picture, yielding a clearer, higher-quality image. This improves the precision of human body target detection and posture key point localization, and hence the accuracy and reliability of the dangerous behavior detection result.
In this embodiment, a monitoring picture may be captured every preset number of frames of the video stream, or at a fixed time interval, for example once every two minutes or once every 30 seconds. The capture frequency is not unique or fixed and can be set according to the actual flow of people.
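The frame-sampling policy just described can be sketched as a generator of frame indices; the `fps` and `interval_s` values below are illustrative parameters, not values fixed by the invention:

```python
import itertools

def frame_indices(fps: float, interval_s: float):
    """Yield the indices of frames to grab, one every `interval_s` seconds."""
    stride = max(1, round(fps * interval_s))
    i = 0
    while True:
        yield i
        i += stride

# At 25 fps, sampling every 30 s means grabbing every 750th frame.
gen = frame_indices(fps=25, interval_s=30)
print(list(itertools.islice(gen, 3)))  # [0, 750, 1500]
```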
Step S3: perform human body target detection on the operators in the monitoring picture to obtain the position information of each operator.
In this embodiment, a YOLOv5 deep network model performs human body target detection on the monitoring picture to determine whether any operator is present. If so, the operators' position information is obtained; if not, no dangerous behavior detection is needed for the current picture, and the process returns to step S2 to capture the next one.
The YOLOv5 deep network model comprises a Backbone layer, a Neck layer, and a Head layer.
The Backbone layer is a backbone network used to aggregate and form image features at different image granularities; it comprises a Focus structure and a CSPNet structure, the latter integrating gradient changes into the image feature map.
The Neck layer mixes and combines image features and passes them to the Head layer; it comprises an FPN structure and a PAN structure.
The Head layer performs prediction on the image features, generating bounding boxes and predicted categories to obtain a prediction box for each operator; the prediction box is the operator's target detection result.
The YOLOv5 deep network model outputs target detection results in the format (x, y, w, h, c), where x and y are the coordinates of the operator's prediction box in the monitoring picture coordinate system, w and h are the width and height of the prediction box, and c is the confidence.
The input of the YOLOv5 deep network model is the monitoring picture and the output is the visual target detection result: a prediction box and its confidence are marked at each operator's position in the picture. In practice, each binocular camera in the production area captures monitoring images from a different angle, and the coordinate information of every object in the images captured by a camera at a fixed angle is known, having been determined in advance by modeling. Marking an operator's prediction box in the monitoring picture with the YOLOv5 model therefore determines the operator's specific position coordinates, from which the relative position between the operator and other objects can be judged.
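Assuming the (x, y, w, h, c) centre-box format described above, a small helper can convert a detection to corner coordinates and drop low-confidence boxes; the threshold value is an illustrative assumption:

```python
def to_corner_box(det):
    """(x, y, w, h, c) with centre coordinates -> (x1, y1, x2, y2, c)."""
    x, y, w, h, c = det
    return (x - w / 2, y - h / 2, x + w / 2, y + h / 2, c)

def keep_confident(dets, thresh=0.5):
    """Keep only detections whose confidence c meets the threshold."""
    return [d for d in dets if d[4] >= thresh]

dets = [(50, 50, 20, 40, 0.9), (10, 10, 5, 5, 0.2)]
print(keep_confident(dets))        # only the 0.9-confidence operator survives
print(to_corner_box(dets[0]))      # (40.0, 30.0, 60.0, 70.0, 0.9)
```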
The training process of the YOLOv5 deep network model in this embodiment includes:
First, a large number of pictures of pedestrians in the factory are collected to build a data set. The data set is then annotated with the labelImg labeling tool and the annotated data are placed in a folder, together with the xml files generated by labelImg. Next, the data set is assembled and the category names in it are declared. Finally, based on the open-source network framework, training starts after parameters such as the number of network layers and the number of iterations are configured; training stops when the maximum number of iterations is reached, yielding the trained weights and completing training.
Step S4: locate human body posture key points in the monitoring picture after human body target detection; the localization determines the current behavior posture of each human body by locating a plurality of human body key points.
In this embodiment, OpenPose is used to locate the posture key points in the monitoring picture after target detection. The structure of OpenPose is shown in fig. 2: it comprises a first branch and a second branch. The first branch predicts the confidence of each extracted posture key point; the second branch encodes and analyzes the degree of association between the operator's joints to obtain affinity vectors, and the confidences and affinity vectors are jointly inferred to locate the key points.
During key point localization, the first 10 layers of a VGG19 network generate a set of features F from the input preprocessed picture, which serves as the input to the first stage of each branch. The first branch predicts the confidence S of each extracted human posture key point; the second branch encodes and analyzes the degree of association between joints to obtain the affinity vectors L, after which the affinity fields of the confidence maps are inferred to cluster the key points and assemble the skeleton. Here S = (S_1, S_2, …, S_J), with S_j ∈ R^(w×h) and j ∈ {1, …, J}; the position of each posture key point is predicted, giving J confidence maps in total. Likewise L = (L_1, L_2, …, L_C), with L_c ∈ R^(w×h×2) and c ∈ {1, …, C}, where C is the number of joint pairs to be detected, giving one vector field per limb.
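The shapes of the two branch outputs can be made concrete with a small NumPy sketch. J = 18 matches the key points of this embodiment, while C = 17 limb pairs and the 46×46 map size are assumed illustrative values. Reading a key point location off its confidence map then amounts to an argmax:

```python
import numpy as np

J, C, h, w = 18, 17, 46, 46       # key points, limb pairs (assumed), map size
S = np.zeros((J, h, w))           # confidence maps S_j, one per key point
L = np.zeros((C, h, w, 2))        # part affinity fields L_c, a 2-D vector per pixel

S[0, 12, 30] = 0.95               # synthetic peak on the nose confidence map
peak = np.unravel_index(np.argmax(S[0]), S[0].shape)
print(peak)  # (12, 30)
```

Real OpenPose output has one peak per visible person on each map, and the affinity fields decide which peaks belong to the same skeleton.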
As shown in fig. 3, the human posture key points in this embodiment comprise 18 joint points in total: nose 0, neck 1, left shoulder 2, left elbow 3, left wrist 4, right shoulder 5, right elbow 6, right wrist 7, left waist 8, left knee 9, left foot 10, right waist 11, right knee 12, right foot 13, left eye 14, right eye 15, left ear 16, and right ear 17.
These 18 joint points are the representative and important ones among the many joints of the human body; using the more important joints as posture key points makes the detection of posture behavior more accurate and reliable.
Note that these 18 joint points are only one preferred set; in practice, other joint positions can be chosen as posture key points according to the actual situation.
The invention combines the YOLOv5 deep network model with the open-source OpenPose structure, i.e., combines pedestrian human body target detection with a human posture key point estimation algorithm, to detect whether the current posture of plant operators is safe and to raise dangerous behavior alarms. This prevents serious accidents, frees plant monitoring staff from heavy monitoring and patrol work, and greatly improves the efficiency of plant safety management.
Step S5: perform dangerous behavior analysis on each operator in the monitoring picture according to the position information and the posture key point localization information to obtain a dangerous behavior analysis result. This specifically includes:
Step S5.1: set up a dangerous behavior sample library comprising a plurality of dangerous behavior samples covering categories such as falling, playing with a mobile phone, leaning in violation of rules, climbing, and personnel gathering.
The dangerous behavior categories in this embodiment are not limited to these few; other categories such as mechanical collision or falling heavy objects can also be included, set according to actual factory conditions.
Step S5.2: identify each operator's current action posture and surrounding environment from the position information and the key point localization information; the environment information is used to judge whether an object to lean on or climb exists around the operator, assisting in determining the specific category of dangerous behavior.
Step S5.3: compare the operator's current action posture and surrounding environment information with the samples in the library to determine whether the posture matches any dangerous behavior category, obtaining the dangerous behavior analysis result.
To build the sample library, a large number of pictures of dangerous behavior in the factory can be collected into a data set containing samples of the different categories, such as operators falling, playing with mobile phones, leaning in violation of rules, climbing, or gathering. A support vector machine algorithm then classifies the samples to form a factory-specific dangerous behavior sample library. The detected action posture and environment information of an operator are compared against the library samples to determine which dangerous behavior category, if any, the current posture belongs to, giving the analysis result.
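As a simplified stand-in for the support-vector-machine comparison described above, the sketch below matches a key point set against library samples by mean Euclidean distance; the labels, threshold, and toy two-point "poses" are all hypothetical:

```python
import math

def pose_distance(a, b):
    """Mean Euclidean distance between two equally sized key point lists."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def classify(pose, library, thresh=0.15):
    """Return the nearest dangerous-behaviour label, or None if nothing is close."""
    best_label, best_d = None, float("inf")
    for label, sample in library.items():
        d = pose_distance(pose, sample)
        if d < best_d:
            best_label, best_d = label, d
    return best_label if best_d <= thresh else None

library = {
    "fall":  [(0.1, 0.9), (0.5, 0.9)],   # toy 2-key-point reference poses
    "climb": [(0.5, 0.1), (0.5, 0.5)],
}
print(classify([(0.12, 0.9), (0.5, 0.88)], library))  # fall
print(classify([(0.9, 0.1), (0.1, 0.1)], library))    # None (no sample near)
```

A trained SVM replaces the nearest-sample rule in the actual embodiment, but the interface — pose features in, category or "normal" out — is the same.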
Step S6: decide whether to activate the alarm according to the dangerous behavior analysis result. This specifically includes:
When a dangerous behavior occurs, the alarm is activated immediately, and the behavior category and the operator's position information are sent to the processor, or directly to a mobile terminal such as the phone of the production area supervisor or the workshop safety manager, so that the specific category and location of the dangerous behavior are known and it can be handled in time.
When no dangerous behavior occurs, the alarm is not activated, and step S7 is executed to capture and analyze the next picture.
Step S7: jump back to step S2, capture the next picture, and repeat steps S2 to S6 to continue dangerous behavior detection.
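The decision logic of steps S6 and S7 can be sketched as a small dispatcher; `notify` stands for whatever channel forwards the alert (processor or manager's phone), and the (category, position) tuple format is an assumption:

```python
def handle_result(result, notify):
    """Activate the alarm only when analysis reports a dangerous behaviour.

    `result` is (category, position) or None; `notify` is any callable that
    forwards the alert. Returns True when the alarm was triggered.
    """
    if result is None:
        return False          # no danger: move on to the next frame (step S7)
    category, position = result
    notify(category, position)
    return True

alerts = []
handle_result(("fall", (120, 340)), lambda c, p: alerts.append((c, p)))
print(alerts)  # [('fall', (120, 340))]
```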
In summary, binocular cameras installed at the corners of the production plant acquire video streams in real time; pictures are captured at intervals and preprocessed. A trained YOLOv5 deep network model, loaded on a deep learning server, detects, locates, and depth-ranges all operators in the field of view. The monitoring pictures of the detected operators are then fed into the OpenPose network to locate their posture key points, which are compared with the dangerous behavior sample library to classify the behavior as normal or as a dangerous behavior such as falling, playing with a mobile phone, leaning in violation of rules, climbing, or gathering. If a dangerous behavior is found, the alarm sounds in time, and the action category, specific position, and related information of the operator are sent to the processor or the responsible manager, who is notified to take timely measures; if not, analysis continues with the next frame. Serious accidents can thus be effectively prevented, the prior-art problems of being unable to automatically and accurately detect dangerous actions or alarm on them are solved, the personal safety of operators is effectively guaranteed, plant monitoring staff are freed from heavy monitoring and patrol work, and the efficiency of plant safety management is improved.
Example 2
As shown in fig. 4, this embodiment provides a vision-based system for detecting dangerous behaviors of operators, where the functions of the system's modules correspond one-to-one to the steps of the method in embodiment 1. The system specifically includes:
and the video stream acquisition module M1 is used for acquiring real-time video monitoring data of the production area to obtain a video stream.
And the monitoring picture intercepting and preprocessing module M2 is configured to intercept monitoring pictures from the video stream at a preset frequency and preprocess the monitoring pictures.
And the human body target detection module M3 is used for detecting the human body target of the operator in the monitoring picture to obtain the position information of the operator.
The human body posture key point positioning module M4 is used for positioning the human body posture key points of the monitored picture after human body target detection; the human body posture key point positioning determines the current behavior posture of the human body in a mode of positioning a plurality of human body key points.
And the dangerous behavior analysis module M5 is used for analyzing dangerous behaviors of each operator in the monitoring picture according to the position information and the human posture key point positioning information to obtain a dangerous behavior analysis result.
And the dangerous behavior alarm module M6 is used for judging whether to activate the alarm or not according to the dangerous behavior analysis result.
And the dangerous behavior continuous detection module M7 is used for returning to the step of intercepting a monitoring picture from the video stream at the preset frequency and preprocessing it, so that the next picture is intercepted and dangerous behavior detection continues.
In the present specification, each embodiment emphasizes its differences from the other embodiments, and the same or similar parts among the embodiments may be referred to each other. Specific examples are applied herein to explain the principle and implementation of the present invention, and the above description of the examples is only intended to help in understanding the method and its core idea; meanwhile, a person skilled in the art may, according to the idea of the present invention, make changes to the specific embodiments and the application range. In view of the above, the contents of this specification should not be construed as limiting the invention.
Claims (10)
1. A vision-based method for detecting dangerous behavior of an operator is characterized by comprising the following steps:
detecting a human body target of an operator in the monitoring picture to obtain position information of the operator;
positioning human body posture key points in the monitoring picture after human body target detection; the human body posture key point positioning determines the current behavior posture of the human body by positioning a plurality of human body key points;
according to the position information and the human body posture key point positioning information, dangerous behavior analysis is carried out on each operator in the monitoring picture to obtain a dangerous behavior analysis result;
and judging whether to activate an alarm or not according to the dangerous behavior analysis result.
2. The vision-based method of detecting dangerous behavior of a worker according to claim 1, wherein the human pose key points include left eye, right eye, nose, left ear, right ear, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, left hip, right hip, left knee, right knee, left foot, and right foot.
3. The vision-based worker dangerous behavior detection method according to claim 1, wherein the dangerous behavior analysis is performed on each worker in the monitoring picture according to the position information and the human body posture key point positioning information to obtain a dangerous behavior analysis result, and specifically includes:
setting a dangerous behavior sample library; the dangerous behavior sample library comprises a plurality of dangerous behavior samples, wherein the dangerous behavior samples comprise sample data of dangerous behavior categories of falling or falling, mobile phone playing, illegal leaning, climbing and personnel gathering;
identifying the current action posture and the surrounding environment information of the operator according to the position information and the human body posture key point positioning information; the surrounding environment information is used for judging whether a leaning object or a climbing object exists around the operator;
and comparing the current action posture and the ambient environment information of the operator with the dangerous behavior samples in the sample library to determine whether the current action posture of the operator is consistent with each dangerous behavior category in the dangerous behavior sample library, so as to obtain a dangerous behavior analysis result.
4. The vision-based detection method for dangerous behaviors of operators according to claim 1, wherein the detecting human body targets of the operators in the monitoring picture to obtain the position information of the operators specifically comprises:
and (3) carrying out human body target detection on the operator in the monitoring picture by adopting a YOLOV5 deep network model to obtain the position information of the operator.
5. The vision-based worker dangerous behavior detection method of claim 4, wherein the YOLOV5 deep network model comprises a Backbone layer, a Neck layer and a Head layer;
the Backbone layer is a backbone network and is used for aggregating and forming image features at different image fine granularities; the Backbone layer comprises a Focus structure and a CSPNet structure, and the CSPNet structure is used for integrating gradient change into the image feature map;
the Neck layer is used for mixing and combining image features and transmitting the image features to the Head layer; the Neck layer comprises an FPN structure and a PAN structure;
the Head layer is used for predicting image characteristics, generating a boundary frame and a prediction category to obtain a prediction frame of the operator, and the prediction frame is a target detection result of the operator.
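As one concrete detail of the Backbone named in this claim, the Focus structure rearranges the input before convolution: each 2x2 pixel block is split across the channel dimension, so an H x W x C image becomes H/2 x W/2 x 4C with no pixels discarded. A minimal pure-Python sketch (real implementations do this with strided tensor slicing, and the exact slice order may differ):

```python
def focus_slice(image):
    """Sketch of YOLOV5's Focus slicing: halve spatial resolution by
    concatenating each 2x2 block of pixels along the channel dimension.
    `image` is a nested list [row][col][channel]; H and W must be even."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(0, h, 2):
        row = []
        for j in range(0, w, 2):
            # the four pixels of the 2x2 block become one 4C-channel pixel
            px = image[i][j] + image[i][j + 1] + image[i + 1][j] + image[i + 1][j + 1]
            row.append(px)
        out.append(row)
    return out
```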
6. The vision-based method for detecting dangerous behaviors of operators according to claim 1, wherein the positioning of the key points of the human posture of the monitored picture after the human target detection specifically comprises:
positioning key points of human body postures of the monitored picture after the human body target is detected by adopting OpenPose;
the OpenPose comprises a first branch structure and a second branch structure;
the first branch structure is used for predicting the confidence coefficient of the extracted key points of the human posture;
the second branch structure is used for encoding and analyzing the degree of association between the joints of the operator to obtain affinity vectors; the confidence of the human posture key points and the affinity vectors are then jointly inferred and analyzed to realize the positioning of the human posture key points.
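The affinity vectors in this claim can be understood as a vector field over the image: a candidate limb between two detected key points is scored by sampling that field along the connecting segment and projecting it onto the limb direction. A sketch, with the field `pafs` modeled as a plain function for illustration (OpenPose actually produces it as a dense network output):

```python
def paf_score(pafs, p1, p2, samples=10):
    """Score a candidate limb between key points p1 and p2 by averaging the
    projection of the part-affinity field onto the limb direction.
    `pafs` is a callable (x, y) -> (vx, vy) standing in for the field."""
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    norm = (dx * dx + dy * dy) ** 0.5
    if norm == 0:
        return 0.0
    ux, uy = dx / norm, dy / norm          # unit vector of the candidate limb
    total = 0.0
    for k in range(samples):
        t = k / (samples - 1)
        x, y = p1[0] + t * dx, p1[1] + t * dy
        vx, vy = pafs(x, y)
        total += vx * ux + vy * uy          # projection onto limb direction
    return total / samples
```

High scores indicate the two key points likely belong to the same person, which is how the multi-person association is resolved.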
7. The vision-based method for detecting dangerous behaviors of operators according to claim 1, wherein before the step of detecting human targets of operators in the monitoring picture to obtain position information of each operator, the method further comprises:
acquiring real-time video monitoring data of a production area to obtain a video stream;
and intercepting monitoring pictures from the video stream at a preset frequency, and preprocessing the monitoring pictures.
8. The vision-based method for detecting dangerous behavior of operator according to claim 7, wherein said preprocessing the monitoring picture specifically comprises:
cropping the monitoring picture to the input size required by the model, to obtain a cropped monitoring picture;
and performing noise reduction and filtering on the cropped monitoring picture.
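A minimal sketch of the cropping step, assuming a centre crop to a square model input; the 640-pixel default and the nested-list frame are illustrative assumptions, since the claim does not specify the crop policy. Noise reduction would follow, e.g. a Gaussian or median filter (in OpenCV, `cv2.GaussianBlur` / `cv2.medianBlur`):

```python
def preprocess(frame, size=640):
    """Centre-crop a frame (nested list of rows) to `size` x `size`,
    matching the model's required input picture size."""
    h, w = len(frame), len(frame[0])
    top = max((h - size) // 2, 0)
    left = max((w - size) // 2, 0)
    return [row[left:left + size] for row in frame[top:top + size]]
```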
9. The vision-based method for detecting dangerous behavior of operating personnel according to claim 1, wherein the step of judging whether to activate an alarm or not according to the dangerous behavior analysis result specifically comprises:
when dangerous behaviors occur, the alarm is immediately activated, and the dangerous behavior category and the position information of the operating personnel are sent to the processor so as to process the dangerous behaviors in time.
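The alarm payload in this claim (behavior category plus operator position, sent to the processor) might be gated so the same worker/behavior pair does not re-trigger on every frame. The cooldown below is an added assumption for illustration, not part of the claim:

```python
import time

class AlarmGate:
    """Build alarm payloads per claim 9, suppressing repeats of the same
    (worker, behavior) pair within a cooldown window (assumed behavior)."""

    def __init__(self, cooldown_s=30.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s
        self.clock = clock          # injectable for testing
        self._last = {}

    def fire(self, worker_id, behavior, position):
        now = self.clock()
        key = (worker_id, behavior)
        if now - self._last.get(key, float("-inf")) < self.cooldown_s:
            return None             # still in cooldown: suppress duplicate
        self._last[key] = now
        return {"behavior": behavior, "position": position, "time": now}
```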
10. A vision-based worker hazardous behavior detection system, comprising:
the human body target detection module is used for detecting the human body target of the operator in the monitoring picture to obtain the position information of the operator;
the human body posture key point positioning module is used for positioning the human body posture key points of the monitored picture after the human body target detection; the human body posture key point positioning determines the current behavior posture of the human body in a mode of positioning a plurality of human body key points;
the dangerous behavior analysis module is used for carrying out dangerous behavior analysis on each operator in the monitoring picture according to the position information and the human body posture key point positioning information to obtain a dangerous behavior analysis result;
and the dangerous behavior alarm module is used for judging whether to activate the alarm or not according to the dangerous behavior analysis result.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111459039.6A CN114155601A (en) | 2021-12-02 | 2021-12-02 | Vision-based method and system for detecting dangerous behaviors of operating personnel |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111459039.6A CN114155601A (en) | 2021-12-02 | 2021-12-02 | Vision-based method and system for detecting dangerous behaviors of operating personnel |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114155601A true CN114155601A (en) | 2022-03-08 |
Family
ID=80455653
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111459039.6A Pending CN114155601A (en) | 2021-12-02 | 2021-12-02 | Vision-based method and system for detecting dangerous behaviors of operating personnel |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114155601A (en) |
Cited By (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114724246B (en) * | 2022-04-11 | 2024-01-30 | 中国人民解放军东部战区总医院 | Dangerous behavior identification method and device |
CN114724246A (en) * | 2022-04-11 | 2022-07-08 | 中国人民解放军东部战区总医院 | Dangerous behavior identification method and device |
CN114821805A (en) * | 2022-05-18 | 2022-07-29 | 湖北大学 | Dangerous behavior early warning method, device and equipment |
CN114979611A (en) * | 2022-05-19 | 2022-08-30 | 国网智能科技股份有限公司 | Binocular sensing system and method |
CN116259003A (en) * | 2023-01-06 | 2023-06-13 | 苏州同企人工智能科技有限公司 | Construction category identification method and system in construction scene |
CN116259003B (en) * | 2023-01-06 | 2023-11-10 | 苏州同企人工智能科技有限公司 | Construction category identification method and system in construction scene |
CN116189305A (en) * | 2023-03-09 | 2023-05-30 | 合肥市轨道交通集团有限公司 | Personnel dangerous action recognition method based on neural network model embedding |
CN116189305B (en) * | 2023-03-09 | 2023-07-18 | 合肥市轨道交通集团有限公司 | Personnel dangerous action recognition method based on neural network model embedding |
CN116778573A (en) * | 2023-05-24 | 2023-09-19 | 深圳市旗扬特种装备技术工程有限公司 | Violence behavior detection method and device, electronic equipment and storage medium |
CN116778573B (en) * | 2023-05-24 | 2024-06-11 | 深圳市旗扬特种装备技术工程有限公司 | Violence behavior detection method and device, electronic equipment and storage medium |
CN116740900A (en) * | 2023-08-15 | 2023-09-12 | 中铁七局集团电务工程有限公司武汉分公司 | SVM-based power construction early warning method and system |
CN116740900B (en) * | 2023-08-15 | 2023-10-13 | 中铁七局集团电务工程有限公司武汉分公司 | SVM-based power construction early warning method and system |
CN116798186A (en) * | 2023-08-21 | 2023-09-22 | 深圳市艾科维达科技有限公司 | Camera visual identification alarm device and method based on Internet of things |
CN116798186B (en) * | 2023-08-21 | 2023-11-03 | 深圳市艾科维达科技有限公司 | Camera visual identification alarm device and method based on Internet of things |
CN117173795A (en) * | 2023-11-03 | 2023-12-05 | 赋之科技(深圳)有限公司 | Dangerous action detection method and terminal |
CN117173795B (en) * | 2023-11-03 | 2024-02-23 | 赋之科技(深圳)有限公司 | Dangerous action detection method and terminal |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN114155601A (en) | Vision-based method and system for detecting dangerous behaviors of operating personnel | |
KR101715001B1 (en) | Display system for safety evaluation in construction sites using of wearable device, and thereof method | |
CN113726606B (en) | Abnormality detection method and apparatus, electronic device, and storage medium | |
CN110889339B (en) | Head and shoulder detection-based dangerous area grading early warning method and system | |
CN112235537A (en) | Transformer substation field operation safety early warning method | |
CN112800901A (en) | Mine personnel safety detection method based on visual perception | |
CN112163497B (en) | Construction site accident prediction method and device based on image recognition | |
CN112685812A (en) | Dynamic supervision method, device, equipment and storage medium | |
CN114155492A (en) | High-altitude operation safety belt hanging rope high-hanging low-hanging use identification method and device and electronic equipment | |
CN115797856A (en) | Intelligent construction scene safety monitoring method based on machine vision | |
CN116259002A (en) | Human body dangerous behavior analysis method based on video | |
CN115223249A (en) | Quick analysis and identification method for unsafe behaviors of underground personnel based on machine vision | |
CN112597903B (en) | Electric power personnel safety state intelligent identification method and medium based on stride measurement | |
CN112576310B (en) | Tunnel security detection method and system based on robot | |
CN117853295A (en) | Safety environmental protection emergency system based on industry interconnection and digital panorama | |
CN116665419B (en) | Intelligent fault early warning system and method based on AI analysis in power production operation | |
CN117058855A (en) | Cloud edge communication method for Internet of things | |
CN116523288A (en) | Base station constructor risk identification method and device, electronic equipment and storage medium | |
CN115346170A (en) | Intelligent monitoring method and device for gas facility area | |
KR20230121229A (en) | Occupational safety and health education system through artificial intelligence video control and method thereof | |
CN113989335A (en) | Method for automatically positioning workers in factory building | |
JP7354461B2 (en) | Monitoring system and method | |
CN117132942B (en) | Indoor personnel real-time distribution monitoring method based on region segmentation | |
CN116227849B (en) | Standardized management and early warning system for enterprise dangerous operation | |
CN115100839B (en) | Monitoring video measured data analysis safety early warning system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||