CN116682175A - Workshop personnel dangerous behavior detection method under complex environment - Google Patents


Info

Publication number
CN116682175A
CN116682175A (Application CN202310634097.0A)
Authority
CN
China
Prior art keywords
joint
node
skeleton
human body
point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310634097.0A
Other languages
Chinese (zh)
Inventor
付亮
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University of Science and Technology
Original Assignee
Nanjing University of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University of Science and Technology filed Critical Nanjing University of Science and Technology
Priority claimed from application CN202310634097.0A
Publication of CN116682175A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20 - Movements or behaviour, e.g. gesture recognition
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/40 - Extraction of image or video features
    • G06V 10/70 - Arrangements using pattern recognition or machine learning
    • G06V 10/764 - Arrangements using classification, e.g. of video objects
    • G06V 10/82 - Arrangements using neural networks
    • Y02P - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P 90/00 - Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P 90/30 - Computing systems specially adapted for manufacturing


Abstract

The invention belongs to the field of computer-vision safety monitoring and in particular discloses a method for detecting dangerous behaviors of workshop personnel in a complex environment. The method comprises: collecting skeleton joint point information and preprocessing it; performing reliability calculation and reconstructing erroneous skeleton joint points; constructing a human three-dimensional spatio-temporal topological graph from the human joint point data; training a neural network on the three-dimensional topological graph to obtain a dangerous-behavior detection model; and importing the constructed human three-dimensional spatio-temporal topological graph into the trained behavior-recognition model to perform behavior detection. Compared with the prior art, the method filters out skeleton data of non-human objects such as products and equipment during preprocessing of the data set and extracts the spatio-temporal features of personnel behaviors with a spatio-temporal graph convolutional network, realizing accurate detection of personnel behaviors. Through reliability calculation, erroneous data are identified and human joint points are reconstructed, which improves the detection of personnel behaviors in occluded scenes in the workshop.

Description

Workshop personnel dangerous behavior detection method under complex environment
Technical Field
The invention belongs to the field of computer vision safety monitoring, and particularly relates to a workshop personnel dangerous behavior detection method under a complex environment.
Background
With the continuous development of industrialization, workshop production activities have become increasingly intelligent and digitized. At present, safety accidents occur frequently in workshops and seriously threaten the personal safety of workshop personnel and the production activities of factories. Analysis of their causes shows that dangerous behaviors of workshop personnel are one of the key factors leading to safety accidents. Manual monitoring is costly, workshops have many monitoring points, and supervisors inevitably miss events through carelessness caused by visual fatigue, so relying on traditional monitoring to prevent dangerous behaviors of personnel is inefficient and of limited effect. Research into intelligent detection of workshop personnel behaviors is therefore of great significance for ensuring safe production in enterprises. With the rapid development of computer-vision technology, behavior detection for workshop personnel has made some progress, but many problems remain. In a complex environment such as a workshop, large numbers of devices and products are stacked; when workers enter the working scene, occlusion by devices or products causes the depth sensor to acquire noisy personnel feature data, which degrades the performance of the behavior detection model. The invention therefore provides a computer-vision-based method for detecting dangerous behaviors of workshop personnel in a complex environment, with the following specific content.
Disclosure of Invention
The invention aims to provide a workshop personnel dangerous behavior detection method in a complex environment, which is based on a computer vision technology and can realize the identification of personnel dangerous behaviors in the complex workshop environment.
The technical solution for realizing the purpose of the invention is as follows: a workshop personnel dangerous behavior detection method under a complex environment comprises the following specific steps:
step 1, acquiring skeleton joint point information and preprocessing;
step 2, reliability calculation is carried out, and reconstruction is carried out on the wrong skeleton joint points;
step 3, constructing a human body three-dimensional space-time topological graph based on human body joint point data;
step 4, performing neural network training on the acquired correct joint point information through a three-dimensional topological graph to obtain a dangerous behavior detection model;
and 5, importing the human body three-dimensional space-time topological graph constructed in the step 3 into the behavior recognition model trained in the step 4, and performing behavior detection.
Compared with the prior art, the invention has the remarkable advantages that:
(1) The invention provides a behavior detection method suited to a complex workshop environment. Personnel feature data acquired by the depth sensor in such an environment are noisy or incomplete; the erroneous joint point data are improved and optimized using a human joint point reconstruction technique, which improves the performance of the behavior detection model in occluded scenes in the workshop.
(2) The invention uses skeleton data for behavior detection, reducing data-processing time in the detection process; skeleton data of non-human objects such as products and equipment are filtered out during preprocessing of the data set, improving the working efficiency of behavior detection; and the spatio-temporal features of personnel behaviors are effectively extracted by the spatio-temporal graph convolutional network, realizing accurate detection of personnel behaviors.
Drawings
FIG. 1 is a schematic flow chart of a workshop personnel dangerous behavior detection method under a complex environment according to an embodiment of the invention;
FIG. 2 is a schematic illustration of a human skeletal joint;
FIG. 3 is a diagram of a behavior detection network;
FIG. 4 is a schematic diagram of an erroneous node search.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings.
The invention discloses a workshop personnel dangerous behavior detection method under a complex environment, which comprises the following steps of:
step 1: acquiring skeleton joint point information and preprocessing;
step 1.1, arranging depth vision sensors (Microsoft Kinect V2) at key stations of a workshop, and installing two depth sensors at each station point; collecting coordinate information of a person skeleton joint point of a workshop site;
skeletal joint information of the personnel behavioural actions is collected with a depth vision sensor (Microsoft Kinect V2) arranged at the shop. Acquisition speed of data setThe rate is 30 frames per second, wherein each frame of skeleton joint information comprises 25 skeleton joints of human body (shown in figure 2), each joint point has meaning (see table 1), and absolute coordinate of joint point i is expressed as (x) i ,y i ,z i ) I=1, 2 … 25. The coordinate origin of the coordinate system is located at the center of the Kinect infrared camera, the X-axis direction is the left direction along the irradiation direction of the camera, the Y-axis direction is the upper direction along the irradiation direction of the camera, and the Z-axis direction is the irradiation direction along the camera.
Table 1 meanings of the joints
Step 1.2, preprocessing the collected data set;
(1) Filtering of non-human skeletal joint data for product equipment and the like
The depth vision sensor sometimes mistakenly collects skeleton joint data of workshop objects whose structure resembles the human skeleton, and the skeleton data of objects misidentified as human bodies must be removed. The method is as follows: the average displacement value of the coordinates of the 25 joint points between adjacent frames is computed as

K = \frac{1}{25} \sum_{i=1}^{25} \sqrt{(x_{(t+1)i} - x_{ti})^2 + (y_{(t+1)i} - y_{ti})^2 + (z_{(t+1)i} - z_{ti})^2}

where the absolute coordinates of joint point i at frame t are defined as (x_{ti}, y_{ti}, z_{ti}) and K is the average displacement of the 25 joint coordinates between frames t and t+1. The threshold is set to 0.01 m; when K is smaller than this value, the data are filtered out.
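As a hedged illustration of this filtering step, the sketch below computes the average inter-frame displacement K for a 25-joint skeleton and applies the 0.01 m threshold; the function names and the NumPy array layout (one 25x3 array per frame) are illustrative, not from the patent:

```python
import numpy as np

K_THRESHOLD = 0.01  # metres, per the description


def mean_displacement(frame_t: np.ndarray, frame_t1: np.ndarray) -> float:
    """Average Euclidean displacement of the 25 joints between adjacent frames (K)."""
    return float(np.linalg.norm(frame_t1 - frame_t, axis=1).mean())


def is_human(frame_t: np.ndarray, frame_t1: np.ndarray, threshold: float = K_THRESHOLD) -> bool:
    """A static, equipment-like 'skeleton' barely moves between frames,
    so K below the threshold marks the track for removal."""
    return mean_displacement(frame_t, frame_t1) >= threshold
```

A stationary mis-detection (equipment) yields K = 0 and is filtered; a person moving even slightly passes the check.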
(2) Conversion of coordinates
The coordinates of the three-dimensional skeleton joint points in the absolute coordinate system are re-expressed in a relative coordinate system. For convenience of processing, the middle of the hips (node 1) is selected as the origin of the relative coordinate system, and all joint points are converted as follows: the coordinates (x_i, y_i, z_i) of joint point i in the absolute coordinate system are converted into the coordinates (x'_i, y'_i, z'_i) of the relative coordinate system.
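The patent does not spell out the conversion formula; assuming it is the usual translation that places the hip-centre joint (node 1) at the origin, a minimal sketch is:

```python
import numpy as np


def to_relative(frame: np.ndarray, origin_index: int = 0) -> np.ndarray:
    """Re-express absolute joint coordinates relative to the hip-centre joint
    (node 1 in the patent's 1-based numbering, index 0 here).
    The subtraction is an assumed interpretation of the conversion step."""
    return frame - frame[origin_index]
```

After conversion the origin joint sits at (0, 0, 0) and every other joint carries its offset from the hips.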
Step 2: reliability calculation is carried out, and reconstruction is carried out on the wrong skeleton joint points;
acquiring coordinate information of a person skeleton joint point on a workshop site through a depth vision sensor Kinect; searching human body joint points in a tree mode; the reliability calculation is carried out on the collected joint point data by utilizing a skeleton data classification algorithm based on joint constraint conditions, so that classification of the joint point data is realized; and re-predicting the wrong skeleton joint information by using Kalman filtering, and correcting the Kalman filtering predicted joint by referring to the constraint condition of the skeleton length to realize the reconstruction of the wrong joint.
Step 2.1, searching the joint points in a tree mode according to the node coordinate information obtained in the step 1.1, and searching out key points;
firstly, searching by utilizing a tree mode according to the structural information of the joint points of a human body, and selecting a left shoulder (node 5), a right shoulder (node 9), a left hip (node 13), a right hip (node 17) and a neck (node 3) as root nodes, wherein the searching schematic diagram of the wrong joint points is shown in fig. 4, and the arrow direction represents the searching direction. Taking the left shoulder (node 5) as an example, the direction of the search is left shoulder (node 5) →left elbow (node 6) →left wrist (node 7) →left hand (node 8), and these nodes are searched one by this search. The meanings of the joints are shown in Table 1.
Step 2.2: reliability calculation, finding out an error node;
because the product or the equipment can shield the body of a worker, the joint data of the person acquired by the Kinect sensor contains some error data, and in order to process the error data, the joint data needs to be classified. The false joint data is generally judged by two aspects, wherein the first point is the shaking condition of the body joint, namely the change of the movement speed; the second point is an abnormality in joint position, such as a sudden lengthening or shortening of certain parts of the body resulting in an uncoordinated body. According to the two aspects, a skeleton data classification algorithm based on joint constraint conditions is provided, and the joint reliability is mainly calculated through two parts: (1) reliability calculations based on articulation velocity; (2) reliability calculations based on bone length.
(1) Reliability calculation based on joint movement speed:
When the Kinect sensor cannot accurately estimate the position of a joint point because of occlusion, the key point jitters. The degree of jitter can be judged by calculating the joint movement speed. With the three-dimensional coordinates of joint point i at frames t and t+1 denoted (x_{ti}, y_{ti}, z_{ti}) and (x_{(t+1)i}, y_{(t+1)i}, z_{(t+1)i}), the displacement S of the joint point is calculated as

S = \sqrt{(x_{(t+1)i} - x_{ti})^2 + (y_{(t+1)i} - y_{ti})^2 + (z_{(t+1)i} - z_{ti})^2}

Because the acquisition frame rate of the Kinect is 30 frames per second, the time between frames is 1/30 s, and the joint movement speed v is calculated as

v = \frac{S}{1/30} = 30S

Taking the normal running speed of an adult, 10 km/h, as reference, the speed threshold is set to 12 km/h. If the computed speed of a joint point exceeds this threshold, the corresponding joint point is judged to be abnormal joint data, i.e. it must be reconstructed.
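The speed check can be sketched as follows; helper names are illustrative, and the km/h conversion (x 3.6 from m/s) is an assumption about the units in the comparison:

```python
import math

FPS = 30                     # Kinect acquisition frame rate
SPEED_THRESHOLD_KMH = 12.0   # threshold from the description


def joint_speed_kmh(p_t, p_t1, fps: int = FPS) -> float:
    """v = S / (1/fps) in m/s, converted to km/h for comparison with the threshold."""
    s = math.dist(p_t, p_t1)   # displacement S in metres
    return s * fps * 3.6


def is_abnormal_speed(p_t, p_t1) -> bool:
    """Flags a jittering joint whose apparent speed exceeds 12 km/h."""
    return joint_speed_kmh(p_t, p_t1) > SPEED_THRESHOLD_KMH
```

For example, a 0.2 m jump between consecutive frames implies 6 m/s (21.6 km/h) and is flagged.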
(2) Reliability calculation based on bone length:
The human skeleton is equivalent to a hinge mechanism, so bone lengths should remain constant during movement. The Euclidean distance between two adjacent joints represents the bone length:

l_{i_j} = \sqrt{(x_i - x_j)^2 + (y_i - y_j)^2 + (z_i - z_j)^2}

where i and j are the labels of human joints and l_{i_j} is the length of the bone between adjacent joint points i and j.
The distance of each bone point when the first frame of the person's whole body is tracked by the Kinect depth sensor is taken as the reference length.
The total number of joints connected to joint point i is defined as S_{joint_num}, and f denotes a line segment between joint i and a connected joint. The difference ratio d_f(t) between the length of segment f in frame t and its reference length is calculated as

d_f(t) = \frac{|l_f(t) - l_{f_std}|}{l_{f_std}}

where l_{f_std} is the reference length of segment f and l_f(t) is the actual length of segment f in frame t.
The bone-length-based reliability of the joint point is determined by the average difference ratio between all joint segments connected to it and their reference lengths:

D(t) = \frac{1}{S_{joint_num}} \sum_{f} d_f(t)

where D(t) is the degree of bone-length difference between joint point i and its adjacent points. The bone-length change threshold between consecutive frames is set to 30%; if D(t) exceeds the threshold, the frame's skeleton joint information is abnormal data and must be reconstructed.
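A hedged sketch of the bone-length reliability computation (bone length, d_f(t) and D(t)); function names are illustrative:

```python
import math


def bone_length(p_i, p_j) -> float:
    """Euclidean distance l_{i_j} between adjacent joints i and j."""
    return math.dist(p_i, p_j)


def difference_ratio(l_actual: float, l_ref: float) -> float:
    """d_f(t) = |l_f(t) - l_{f_std}| / l_{f_std}."""
    return abs(l_actual - l_ref) / l_ref


def is_abnormal_bone(segment_lengths, reference_lengths, threshold: float = 0.30) -> bool:
    """D(t): mean difference ratio over all segments incident to the joint,
    flagged for reconstruction when it exceeds the 30% threshold."""
    ratios = [difference_ratio(l, r) for l, r in zip(segment_lengths, reference_lengths)]
    return sum(ratios) / len(ratios) > threshold
```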
Step 2.3: reconstructing the erroneous joint points using Kalman filtering;
Step 2.3.1: the coordinates of the erroneous joint point obtained in step 2.2 are denoted (X_1, Y_1, Z_1); Kalman filtering is used for prediction, giving the predicted joint coordinates (\hat{X}_1, \hat{Y}_1, \hat{Z}_1).
Step 2.3.2: the bone reference length obtained in step 2.2 is used as a constraint condition to adjust the joint coordinates (\hat{X}_1, \hat{Y}_1, \hat{Z}_1) predicted in step 2.3.1. The erroneous joint is taken as a child node and the previous joint as its parent node, with coordinates (X_2, Y_2, Z_2). Since the bone length between the estimated joint position and its parent node is constant, with reference length l_{f_std} taken from the bone-length-based reliability calculation, the estimated joint should lie on the sphere centred at the parent node position (X_2, Y_2, Z_2) with radius l_{f_std}:

(X_2 - X)^2 + (Y_2 - Y)^2 + (Z_2 - Z)^2 = l_{f_std}^2

Step 2.3.3: the point on the sphere with the smallest Euclidean distance to the predicted joint position (\hat{X}_1, \hat{Y}_1, \hat{Z}_1) is selected as the optimized joint position (X^*, Y^*, Z^*). The line through (\hat{X}_1, \hat{Y}_1, \hat{Z}_1) and (X_2, Y_2, Z_2) is first established:

\frac{X - X_2}{\hat{X}_1 - X_2} = \frac{Y - Y_2}{\hat{Y}_1 - Y_2} = \frac{Z - Z_2}{\hat{Z}_1 - Z_2}

Solving the two equations jointly gives two intersection points; the solution with the smallest Euclidean distance to (\hat{X}_1, \hat{Y}_1, \hat{Z}_1) is selected as the optimized joint position (X^*, Y^*, Z^*).
Step 3: constructing a human body three-dimensional space-time topological graph based on human body joint point data;
and (3) serially splicing the reconstructed error joint points in the step (2) and the original correct human joint points to form the human three-dimensional skeleton space-time topological graph. The three-dimensional skeleton space-time topological graph consists of a joint point set and an edge set, and is shown in the following formula:
G=(V,E)
wherein G is a human body three-dimensional skeleton space-time topological graph, V is a skeleton articulation point data set under a relative coordinate system, and V= { V ti T=1, 2, …, T, i=1, 2, …, N }, T representing the T-th frame of skeleton node data, i representing the sequence number of the node (as shown in table 1), n=25 representing 25 skeleton nodes; whereas the edge set e= { E consisting of spatial and temporal edges of the skeleton S ,E T E, where E S ={v ti v tj I (i, j) E H represents skeleton edges of skeleton map with naturally connected joint points, H is a joint point pair set of human body naturally connected joint points, E T ={v ti v (t+1)i The time edge that the t frame is connected with the same skeleton joint point of the t+1st frame is the dynamic position change of the skeleton joint point in space with time change.
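A minimal sketch of the G = (V, E) construction; the joint-pair set H passed in below is a hypothetical subset of the natural connections, not the full 25-joint skeleton:

```python
def build_st_graph(T: int, H, N: int = 25):
    """Build the spatio-temporal skeleton graph G = (V, E_S + E_T).

    V: one node (t, i) per frame t and joint i.
    E_S: spatial edges between naturally connected joints within each frame.
    E_T: temporal edges linking the same joint in consecutive frames.
    """
    V = [(t, i) for t in range(1, T + 1) for i in range(1, N + 1)]
    E_S = [((t, i), (t, j)) for t in range(1, T + 1) for (i, j) in H]
    E_T = [((t, i), (t + 1, i)) for t in range(1, T) for i in range(1, N + 1)]
    return V, E_S + E_T


# e.g. H = [(1, 2), (2, 3)]  # hypothetical subset of naturally connected pairs
```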
Step 4: performing neural network training on the acquired correct joint point information through a three-dimensional topological graph to obtain a dangerous behavior detection model;
step 4.1, defining a dangerous behavior category designed according to workshop requirements;
designing dangerous behavior categories according to workshop requirements, wherein the behaviors comprise normal behaviors and dangerous behaviors, and the normal behaviors comprise carrying and walking; dangerous activities include running, jumping, leaning against a product or device, using a communication device, smoking, entering a device range, throwing, and turning around;
step 4.2: preprocessing according to the node coordinate information obtained in the step 1.2, and setting the information acquired during model training to be correct joint point information; then constructing a human body three-dimensional topological graph for the preprocessed joint position data according to the operation of the step 3;
step 4.3: training a neural network by using the three-dimensional topological graph of the human body constructed in the step 4.2 to obtain a dangerous behavior detection model;
based on the three-dimensional space-time topological graph of the human body formed by the joint points in the step 4.2, the three-dimensional space-time topological graph is sent into a space-time graph convolution network (ST-GCN), 9 space-time graph convolution modules are stacked in front of the network, the number of output channels from the first layer to the third layer is 64, the number of output channels from the fourth layer to the sixth layer is 128, the number of output channels from the seventh layer to the ninth layer is 256, then a global average pooling layer is added at the back of the network, all the different sample features are mapped, and network parameters are reduced. Finally, the classification result is predicted by a Softmax (normalized exponential function) classifier.
During training, the total number of epochs is set to 100 and the batch size to 64. The learning rate is set to 0.1 for the first 30 epochs, 0.01 for epochs 31 to 60, and 0.001 for epochs 61 to 100. The loss function is cross-entropy, the optimizer is Adam, and the model is saved after training finishes.
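The step learning-rate schedule described above can be sketched as a plain function (1-based epoch numbering is assumed from the text):

```python
def learning_rate(epoch: int) -> float:
    """Step schedule from the training setup: 0.1 for epochs 1-30,
    0.01 for epochs 31-60, 0.001 for epochs 61-100."""
    if epoch <= 30:
        return 0.1
    if epoch <= 60:
        return 0.01
    return 0.001
```

In a framework such as PyTorch this would typically be expressed as a milestone-based scheduler wrapped around the Adam optimizer.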
Step 5: and (3) importing the human body three-dimensional space-time topological graph constructed in the step (3) into the behavior recognition model trained in the step (4) to perform behavior detection.
The constructed human three-dimensional topological graph is taken as input, its features are extracted by spatio-temporal graph convolution, and the model trained in step 4 is loaded to perform behavior recognition on the input. The threshold is set to 0.8: when the behavior detection network's score for a predefined dangerous behavior class exceeds this threshold, the behavior is judged to be dangerous.
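A hedged sketch of the final thresholded decision: a Softmax over class scores, with a behavior flagged as dangerous only when a predefined dangerous class scores above 0.8. The class names and the dangerous subset below are illustrative stand-ins for the categories defined in step 4.1:

```python
import math

DANGEROUS = {"running", "jumping", "smoking"}  # illustrative subset of classes
THRESHOLD = 0.8


def softmax(scores):
    """Numerically stable normalized exponential over raw class scores."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]


def detect(class_names, logits):
    """Return (top class, dangerous?) using the 0.8 confidence threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    name = class_names[best]
    return name, (name in DANGEROUS and probs[best] > THRESHOLD)
```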
The specific dangerous-behavior detection process is as follows: the Kinect depth sensor continuously collects the positions of personnel skeleton joint points; the coordinate points are searched in tree mode and classified according to joint-speed reliability and bone-length reliability; when a joint point is judged erroneous, it is re-predicted with Kalman filtering, and the predicted joint is corrected using the bone-length constraint condition. The correct joint points and the optimized erroneous joints are used to construct the human three-dimensional spatio-temporal topological graph, and the behavior detection model is then loaded to perform behavior detection, realizing dangerous-behavior detection of workshop personnel in a complex environment.

Claims (7)

1. The workshop personnel dangerous behavior detection method under the complex environment is characterized by comprising the following specific steps:
step 1, acquiring skeleton joint point information and preprocessing;
step 2, reliability calculation is carried out, and reconstruction is carried out on the wrong skeleton joint points;
step 3, constructing a human body three-dimensional space-time topological graph based on human body joint point data;
step 4, performing neural network training on the acquired correct joint point information through a three-dimensional topological graph to obtain a dangerous behavior detection model;
and 5, importing the human body three-dimensional space-time topological graph constructed in the step 3 into the behavior recognition model trained in the step 4, and performing behavior detection.
2. The method for detecting dangerous behaviors of workshop personnel in a complex environment according to claim 1, wherein the specific steps of the step 1 include:
step 1.1, arranging a depth vision sensor (Kinect) at a key station of a workshop, and collecting coordinate information of a person skeleton joint point of a workshop site;
definition of absolute coordinates of the node i is expressed as (x i ,y i ,z i ) I=1, 2 … 25, the human body contains 25 skeletal joints;
step 1.2, preprocessing the collected data set;
(1) Filtering of non-human skeleton joint data of products, equipment and the like
The method for removing the skeleton data of objects misidentified as human bodies is as follows: the average displacement value of the coordinates of the 25 joint points between adjacent frames is computed as

K = \frac{1}{25} \sum_{i=1}^{25} \sqrt{(x_{(t+1)i} - x_{ti})^2 + (y_{(t+1)i} - y_{ti})^2 + (z_{(t+1)i} - z_{ti})^2}

where the absolute coordinates of joint point i at frame t are defined as (x_{ti}, y_{ti}, z_{ti}) and K is the average displacement of the 25 joint coordinates between frames t and t+1; the threshold is set to 0.01 m, and when K is smaller than this value the data are filtered out.
(2) Conversion of coordinates
The coordinates of the three-dimensional skeleton joint points in the absolute coordinate system are re-expressed in a relative coordinate system; for convenience of processing, the middle of the hips (node 1) is selected as the origin of the relative coordinate system, and all joint points are converted as follows:
the coordinates (x_i, y_i, z_i) of joint point i in the absolute coordinate system are converted into the coordinates (x'_i, y'_i, z'_i) of the relative coordinate system.
3. The method for detecting dangerous behaviors of workshop personnel in a complex environment according to claim 1, wherein the specific steps of the step 2 include:
step 2.1, searching the joint points in a tree mode according to the node coordinate information obtained in the step 1.1, and searching out key points; wherein, the left shoulder (node 5), the right shoulder (node 9), the left hip (node 13), the right hip (node 17) and the neck (node 3) are taken as root nodes;
2.2, calculating reliability, and finding out an error node;
and 2.3, reconstructing the error key points by using Kalman filtering.
4. The method for detecting dangerous behavior of workshop personnel in a complex environment according to claim 1, wherein the specific steps of the step 4 include:
step 4.1, defining a dangerous behavior category designed according to workshop requirements;
step 4.2, preprocessing the node coordinate information obtained in the step 1.2, setting the obtained joint point information to be correct information, and then constructing a human body three-dimensional topological graph on the preprocessed joint position data according to the operation of the step 3;
step 4.3, training a neural network by using the three-dimensional topological graph of the human body constructed in the step 4.2 to obtain a dangerous behavior detection model;
sending a human body three-dimensional topological graph into a space-time diagram convolution network, stacking 9 space-time diagram convolution modules in front of the network, stacking 9 diagram convolution blocks in front of the network, performing space diagram convolution and time convolution on each diagram convolution block to extract behavior characteristics, wherein the number of output channels from a first layer to a third layer is 64, the number of output channels from a fourth layer to a sixth layer is 128, the number of output channels from a seventh layer to a ninth layer is 256, adding a global average pooling layer behind the network, mapping all different sample characteristics, and reducing network parameters; and finally, predicting a classification result through a normalized exponential function classifier.
5. The method for detecting dangerous behavior of workshop personnel in a complex environment according to claim 3, wherein the reliability calculation of step 2.2 includes two forms:
(1) Reliability calculation based on joint movement speed;
setting the three-dimensional coordinates of joint point i at frames t and t+1 to (x_{ti}, y_{ti}, z_{ti}) and (x_{(t+1)i}, y_{(t+1)i}, z_{(t+1)i}), the displacement S of the joint point is calculated as

S = \sqrt{(x_{(t+1)i} - x_{ti})^2 + (y_{(t+1)i} - y_{ti})^2 + (z_{(t+1)i} - z_{ti})^2}

because the acquisition frame rate of the Kinect is 30 frames per second, the time between frames is 1/30 s, and the joint movement speed v is calculated as

v = \frac{S}{1/30} = 30S

taking the normal running speed of an adult, 10 km/h, as reference, the speed threshold is set to 12 km/h; if the computed speed of a joint point exceeds the threshold, the corresponding joint point is judged to be abnormal joint data, i.e. the joint data must be reconstructed;
(2) A reliability calculation based on bone length;
the skeleton of the human body is equivalent to a hinge mechanism, the length of the skeleton should be constant in the movement process, and the Euclidean distance between two adjacent joints is used for representing the length of the skeleton, and the following formula is shown:
wherein i and j represent the reference numerals of human body joints and bone length i_j Representing the lengths of adjacent nodes i and j.
Taking the distance between adjacent bone points, measured when the Kinect depth sensor first tracks the person's whole body in the first frame, as the reference length;
defining the total number of joint points connected to joint point i as S_joint_num, and denoting a line segment between joint point i and a connected joint point as f, the difference ratio d_f(t) between the length of line segment f in frame t and its reference length is calculated by the formula:

d_f(t) = |l_f(t) − l_f_std| / l_f_std

wherein l_f_std is the reference length of line segment f, and l_f(t) is the actual length of line segment f in frame t.
The bone-length-based reliability of a joint point is determined by the average value of the difference ratios between all bone line segments connected to the joint point and their reference lengths, as shown in the following formula:

D(t) = (1 / S_joint_num) Σ_f d_f(t)

wherein D(t) is the degree of difference of the bone lengths between joint point i and its adjacent points; the bone length change threshold between consecutive frames is set to 30%, and if the value of D(t) is larger than the threshold, the skeleton joint information of the frame is abnormal data and needs to be reconstructed.
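The bone-length reliability computation above can be sketched as follows (illustrative names; segment lengths are assumed to be in the same units as their reference lengths):

```python
import math

BONE_CHANGE_THRESHOLD = 0.30  # 30% bone-length change threshold

def bone_length(p_i, p_j):
    """Euclidean distance between adjacent joint points i and j."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p_i, p_j)))

def length_difference_ratio(l_actual, l_std):
    """d_f(t): relative deviation of segment f from its reference length."""
    return abs(l_actual - l_std) / l_std

def bone_length_reliability(segment_lengths, reference_lengths):
    """D(t): mean difference ratio over all segments attached to a joint.
    Returns (D, abnormal flag) -- abnormal when D exceeds 30%."""
    ratios = [length_difference_ratio(l, s)
              for l, s in zip(segment_lengths, reference_lengths)]
    D = sum(ratios) / len(ratios)
    return D, D > BONE_CHANGE_THRESHOLD
```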
6. The method for detecting dangerous behaviors of workshop personnel in a complex environment according to claim 3, wherein the specific steps of the step 2.3 are as follows:
step 2.3.1, defining the erroneous joint point position as (X_1, Y_1, Z_1) and obtaining the predicted joint point position;
step 2.3.2, using the bone reference length obtained in step 2.2 as a constraint condition to adjust the joint point position predicted in step 2.3.1; the erroneous joint point is taken as a child node, and the joint point immediately preceding it is taken as its parent node (X_2, Y_2, Z_2); since the bone length between the estimated joint point position and its parent node is constant, referring to the reference bone length l_f_std of the two joint points from the bone-length-based reliability calculation, the estimated joint point should lie on a spherical surface centered at the parent node position (X_2, Y_2, Z_2) with radius l_f_std, as represented by the following formula:

(X_2 − X)² + (Y_2 − Y)² + (Z_2 − Z)² = l_f_std²

step 2.3.3, selecting the point on the spherical surface with the smallest Euclidean distance to the estimated joint position as the optimized estimated joint position.
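Step 2.3.3 amounts to projecting the predicted point onto the sphere of radius l_f_std centered at the parent joint: the closest point on the sphere lies along the ray from the parent through the prediction. A sketch, with illustrative names:

```python
import math

def project_to_bone_sphere(parent, predicted, l_std):
    """Return the point on the sphere of radius l_std centered at the
    parent joint that is closest (in Euclidean distance) to the
    predicted joint position."""
    direction = [p - q for p, q in zip(predicted, parent)]
    norm = math.sqrt(sum(d * d for d in direction))
    if norm == 0.0:
        # Degenerate case: prediction coincides with the parent joint;
        # every sphere point is equally close, so pick +x arbitrarily.
        return (parent[0] + l_std, parent[1], parent[2])
    scale = l_std / norm
    return tuple(q + d * scale for q, d in zip(parent, direction))
```

The returned point always sits exactly one reference bone length from the parent, so the bone-length constraint holds after optimization.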
7. The method for detecting dangerous behaviors of workshop personnel in a complex environment according to claim 1, wherein the reconstructed error joint points in the step 2 and the original correct human joint points are connected in series to form a human three-dimensional skeleton space-time topological graph; the three-dimensional skeleton space-time topological graph consists of a joint point set and an edge set, and is shown in the following formula:
G=(V,E)
wherein G is the human three-dimensional skeleton space-time topological graph, and V is the skeleton joint point set under a relative coordinate system, V = {v_ti | t = 1, 2, …, T; i = 1, 2, …, N}, where t represents the t-th frame of skeleton joint point data, i represents the serial number of the joint point (as shown in Table 1), and N = 25 represents the 25 skeleton joint points; the edge set E = {E_S, E_T} consists of the spatial and temporal edges of the skeleton, where E_S = {v_ti v_tj | (i, j) ∈ H} represents the skeleton edges of the skeleton map formed by naturally connected joint points, H being the set of naturally connected joint point pairs of the human body, and E_T = {v_ti v_(t+1)i} is the temporal edge connecting the same skeleton joint point in frame t and frame t+1, namely the dynamic position change of the skeleton joint point in space over time.
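The vertex set V and the edge sets E_S and E_T defined above can be enumerated as follows (a sketch; joint and frame indices here are 0-based, unlike the 1-based numbering in the claim, and the natural-connection pair set H must be supplied):

```python
def build_spacetime_graph(num_frames, joint_pairs, num_joints=25):
    """Construct the vertex set and the two edge sets of the skeleton
    space-time graph G = (V, E).

    Vertices are (t, i) pairs; joint_pairs plays the role of H, the set
    of naturally connected joint index pairs. E_S holds the spatial
    skeleton edges inside each frame; E_T holds the temporal edges
    linking the same joint across consecutive frames."""
    V = [(t, i) for t in range(num_frames) for i in range(num_joints)]
    E_S = [((t, i), (t, j))
           for t in range(num_frames) for (i, j) in joint_pairs]
    E_T = [((t, i), (t + 1, i))
           for t in range(num_frames - 1) for i in range(num_joints)]
    return V, E_S, E_T
```

For T frames and N joints this yields T·N vertices, T·|H| spatial edges, and (T−1)·N temporal edges.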
CN202310634097.0A 2023-05-31 2023-05-31 Workshop personnel dangerous behavior detection method under complex environment Pending CN116682175A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310634097.0A CN116682175A (en) 2023-05-31 2023-05-31 Workshop personnel dangerous behavior detection method under complex environment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310634097.0A CN116682175A (en) 2023-05-31 2023-05-31 Workshop personnel dangerous behavior detection method under complex environment

Publications (1)

Publication Number Publication Date
CN116682175A true CN116682175A (en) 2023-09-01

Family

ID=87784795

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310634097.0A Pending CN116682175A (en) 2023-05-31 2023-05-31 Workshop personnel dangerous behavior detection method under complex environment

Country Status (1)

Country Link
CN (1) CN116682175A (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117253171A * 2023-09-27 2023-12-19 智点恒创(苏州)智能科技有限公司 Risk behavior identification method and system
CN117253171B * 2023-09-27 2024-03-22 智点恒创(苏州)智能科技有限公司 Risk behavior identification method and system
CN117783793A * 2024-02-23 2024-03-29 泸州老窖股份有限公司 Fault monitoring method and system for switch cabinet
CN117783793B * 2024-02-23 2024-05-07 泸州老窖股份有限公司 Fault monitoring method and system for switch cabinet

Similar Documents

Publication Publication Date Title
CN116682175A (en) Workshop personnel dangerous behavior detection method under complex environment
CN107941537B (en) A kind of mechanical equipment health state evaluation method
CN110425005B (en) Safety monitoring and early warning method for man-machine interaction behavior of belt transport personnel under mine
Wu et al. A method of vehicle classification using models and neural networks
CN108509897A (en) A kind of human posture recognition method and system
CN107397658B (en) Multi-scale full-convolution network and visual blind guiding method and device
CN112699771B (en) Abnormal behavior detection method based on human body posture prediction
CN111553229B (en) Worker action identification method and device based on three-dimensional skeleton and LSTM
JPH0285975A (en) Pattern data processor, process measuring information processor, image processor and image recognition device
CN114582030A (en) Behavior recognition method based on service robot
CN108764541B (en) Wind energy prediction method combining space characteristic and error processing
CN115471865A (en) Operation site digital safety control method, device, equipment and storage medium
CN115311241A (en) Coal mine down-hole person detection method based on image fusion and feature enhancement
CN113807951A (en) Transaction data trend prediction method and system based on deep learning
CN113269076A (en) Violent behavior detection system and detection method based on distributed monitoring
CN112818942B (en) Pedestrian action recognition method and system in vehicle driving process
CN112817955B (en) Regression model-based data cleaning method
CN111380687B (en) Industrial motor bearing fault diagnosis method based on multi-local model decision fusion
CN117612249A (en) Underground miner dangerous behavior identification method and device based on improved OpenPose algorithm
CN117351298A (en) Mine operation vehicle detection method and system based on deep learning
CN117423157A (en) Mine abnormal video action understanding method combining migration learning and regional invasion
CN115879619A (en) Method and system for predicting day-ahead carbon emission factor of transformer substation
CN114067360A (en) Pedestrian attribute detection method and device
CN108897640B (en) System and method for detecting error position data in crowd sensing
CN114936203B (en) Method based on time sequence data and business data fusion analysis

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination