CN113378809B - Tumbling detection method and system based on monocular three-dimensional human body posture - Google Patents

Tumbling detection method and system based on monocular three-dimensional human body posture

Info

Publication number
CN113378809B
CN113378809B (application CN202110937135.0A)
Authority
CN
China
Prior art keywords
human body
joint
judging
dimensional
ankle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110937135.0A
Other languages
Chinese (zh)
Other versions
CN113378809A (en)
Inventor
祝敏航
徐晓刚
朱岳江
王军
李玲
刘雪莹
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Lab
Original Assignee
Zhejiang Lab
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Lab
Priority to CN202110937135.0A
Publication of CN113378809A
Application granted
Publication of CN113378809B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2218/00: Aspects of pattern recognition specially adapted for signal processing
    • G06F 2218/12: Classification; Matching
    • G06F 2218/16: Classification; Matching by matching signal segments
    • G06F 2218/20: Classification; Matching by matching signal segments by applying autoregressive analysis

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a fall detection method based on the monocular three-dimensional human body posture. A target detection module detects the rectangular frames of the human body and of articles suitable for lying, such as sofas and beds, in the scene image; the human body image is then input into a three-dimensional human body posture estimation module to obtain the parameters of a digital human body model, including the posture of the human body in the camera coordinate system, the posture of each joint relative to its parent joint, and the three-dimensional coordinates of each joint in the human body coordinate system. A decision module uses these human body model parameters to judge whether the person has fallen. The method obtains the three-dimensional human body posture from a single camera, requires no additional annotated images to train or optimize the fall detection task, has low hardware cost, and can be widely applied in fields such as intelligent monitoring.

Description

Tumbling detection method and system based on monocular three-dimensional human body posture
Technical Field
The invention relates to the field of computer vision, in particular to a tumble detection method and system based on a monocular three-dimensional human body posture.
Background
In the field of intelligent monitoring, the timely detection of abnormal behavior is a basic requirement, and detecting a fall in time can effectively protect personal safety. Common fall detection approaches include static detection, dynamic detection, head-trajectory detection, and detection using depth information. Static detection applies a general object detection algorithm to decide directly whether a person has fallen, but it requires a large amount of annotated data, which is difficult to obtain for abnormal behaviors. Dynamic detection mainly captures the change of the human body posture at the moment of falling, which places high demands on the real-time performance of the detection and therefore on the performance of the equipment running the algorithm. Head-trajectory detection tracks the trajectory, velocity and acceleration of the head, but the change characteristics of the head trajectory have limited ability to characterize falling behavior, so the detection accuracy is low. Detection using depth information usually requires multiple cameras or a depth camera, which is inconvenient for monitoring deployments.
Therefore, a fall detection method that relies only on a monocular camera, requires little annotated data, places low demands on equipment performance, and achieves high detection accuracy is needed; such a method can be widely applied in the field of intelligent monitoring.
Disclosure of Invention
In order to overcome the defects of the prior art and achieve the goals of convenient deployment, reduced annotated training data, lower equipment performance requirements, and improved detection accuracy, the invention adopts the following technical scheme:
a tumble detection method based on a monocular three-dimensional human body posture comprises the following steps:
s1, acquiring a human body rectangular frame in the imagerect p And article rectangular framerect obj
S2, according to the human body rectangle framerect p Acquiring human body parameters, including: pose of human body under camera coordinate systemR c The posture of each joint relative to its father jointR i Great and three-dimensional coordinate of each joint under human body coordinate systemJ i An }, saidJ i Comprises hip joint coordinatesJ hip And ankle joint coordinatesJ ankle
S3, judging whether the person falls down according to the human body parameters, comprising the following steps:
s31, judging the ratio of the overlapping area of the human body rectangular frame and the article rectangular frame to the human body rectangular frame
Figure 377728DEST_PATH_IMAGE001
Whether or not greater than the overlap thresholdt α (0<t α <1) If yes, judging the robot to be in a non-tumbling state, and if not, entering the next step;
s32, judging the body vectorR c [:,1]To the groundNormal vectorN g Angle of (2)θ=acrcos(R c [:,1]·N g ) Whether or not it is greater than the angle thresholdt θ (0<t θ <90) If not, entering the next step; if yes, continuously judging the hip joint coordinatesJ hip And ankle joint coordinatesJ ankle Normal vector on the groundN g Distance of upper projection
Figure 498130DEST_PATH_IMAGE002
Whether or not it is less than the first projection thresholdt dng If yes, judging the falling state; if not, judging the non-tumbling state;
s33, judging hip joint coordinatesJ hip And ankle joint coordinatesJ ankle Distance projected on the ground
Figure 730398DEST_PATH_IMAGE003
Whether or not it is greater than the second projection thresholdt g If yes, judging the falling state; if not, the non-tumbling state is judged.
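The decision logic of steps S31 to S33 can be summarized in the minimal sketch below, assuming axis-aligned (x1, y1, x2, y2) rectangles, unit vectors for R_c[:,1] and N_g, and NumPy for the vector arithmetic; the function name, argument layout and default thresholds (taken from the detailed embodiment) are illustrative assumptions rather than the claimed implementation.

```python
import numpy as np

def detect_fall(rect_p, rect_obj, R_c, J_hip, J_ankle, N_g,
                t_alpha=0.8, t_theta_deg=45.0, t_dng=0.2, t_g=0.4):
    """Rule-based decision of steps S31-S33 (illustrative sketch).

    rect_p, rect_obj: (x1, y1, x2, y2) boxes of the person and of a
    lying-suitable article (sofa, bed); R_c: 3x3 body rotation in the
    camera frame; J_hip, J_ankle: 3D joint coordinates; N_g: unit
    ground-normal vector.  Returns True for a fall state.
    """
    # S31: overlap area between the person box and the article box,
    # normalized by the person-box area.
    ix1, iy1 = max(rect_p[0], rect_obj[0]), max(rect_p[1], rect_obj[1])
    ix2, iy2 = min(rect_p[2], rect_obj[2]), min(rect_p[3], rect_obj[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_p = (rect_p[2] - rect_p[0]) * (rect_p[3] - rect_p[1])
    if inter / max(area_p, 1e-6) > t_alpha:
        return False                         # sitting/lying on the article, not a fall

    # S32: angle between the torso vector R_c[:, 1] and the ground normal N_g.
    torso = R_c[:, 1] / np.linalg.norm(R_c[:, 1])
    theta = np.degrees(np.arccos(np.clip(np.dot(torso, N_g), -1.0, 1.0)))
    hip_ankle = J_hip - J_ankle
    if theta > t_theta_deg:
        d_ng = abs(np.dot(hip_ankle, N_g))   # hip-ankle extent along the normal
        return d_ng < t_dng                  # small vertical extent suggests lying, i.e. a fall

    # S33: hip-ankle distance projected onto the ground plane.
    d_g = np.linalg.norm(hip_ankle - np.dot(hip_ankle, N_g) * N_g)
    return d_g > t_g
```

The keyword defaults correspond to the thresholds t_α, t_θ, t_dng and t_g above; the concrete values used by the embodiment appear again in the detailed description.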
Without changing the existing monitoring hardware, the fall detection task is performed with a single monocular camera; the method requires little annotated data, places low demands on equipment performance, and achieves high detection accuracy, so it can be widely applied in the field of intelligent monitoring, is convenient to deploy, and reduces monitoring cost.
Further, the S2 includes the following steps:
S21, cropping the human body image I_p according to the human body rectangular frame rect_p;
S22, extracting the feature F of the human body image I_p through feature encoding, and regressing from the feature F, through iterative feature regression, the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the human body shape S;
S23, regressing R_c, {R_i} and S again to obtain the three-dimensional coordinates {J_i} of each joint in the human body coordinate system, wherein {J_i} comprises the hip joint coordinate J_hip and the ankle joint coordinate J_ankle.
Furthermore, the prediction result is smoothed: a fall is reported only when a fall is predicted in T consecutive frames, so as to avoid false alarms caused by a person deliberately sitting down, lying down on the spot, or exercising.
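A minimal sketch of such temporal smoothing is given below; the class name and the choice of T are illustrative assumptions, and any per-frame fall decision (for example, the detect_fall sketch above) can feed it.

```python
from collections import deque

class FallSmoother:
    """Report a fall only after T consecutive per-frame fall predictions."""

    def __init__(self, T: int = 10):          # T is application-dependent
        self.window = deque(maxlen=T)

    def update(self, frame_is_fall: bool) -> bool:
        self.window.append(frame_is_fall)
        # Alarm only once the window is full and every frame in it predicted a fall.
        return len(self.window) == self.window.maxlen and all(self.window)
```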
A fall detection system based on the monocular three-dimensional human body posture comprises a target detection module, a three-dimensional human body posture estimation module and a decision module which are connected in sequence;
the target detection module is used for acquiring the human body rectangular frame rect_p and the article rectangular frame rect_obj in the image;
the three-dimensional human body posture estimation module crops the human body image I_p according to the human body rectangular frame rect_p, obtains from the human body image I_p the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the human body shape S, and obtains from R_c, {R_i} and S the three-dimensional coordinates {J_i} of each joint in the human body coordinate system, including the hip joint coordinate J_hip and the ankle joint coordinate J_ankle;
The decision module is used for judging whether the person has fallen; the decision process is as follows:
(1) judging whether the ratio of the overlapping area of the human body rectangular frame and the article rectangular frame to the area of the human body rectangular frame, α = area(rect_p ∩ rect_obj) / area(rect_p), is greater than the overlap threshold t_α (0 < t_α < 1); if yes, a non-fall state is judged; if not, the next step is entered;
(2) judging whether the angle θ = arccos(R_c[:,1] · N_g) between the torso vector R_c[:,1] and the ground normal vector N_g is greater than the angle threshold t_θ (0° < t_θ < 90°); if not, the next step is entered; if yes, it is further judged whether the distance of the projection of the hip joint coordinate J_hip and the ankle joint coordinate J_ankle onto the ground normal vector N_g, d_ng = |(J_hip − J_ankle) · N_g|, is less than the first projection threshold t_dng; if yes, a fall state is judged; if not, a non-fall state is judged;
(3) judging whether the distance between the hip joint coordinate J_hip and the ankle joint coordinate J_ankle projected onto the ground, d_g = ‖(J_hip − J_ankle) − ((J_hip − J_ankle) · N_g) N_g‖, is greater than the second projection threshold t_g; if yes, a fall state is judged; if not, a non-fall state is judged.
Further, the three-dimensional human body posture estimation module comprises a feature encoder, an iterative feature regressor and a regressor.
Further, the feature encoder extracts the feature F of the human body image I_p.
Further, the iterative feature regressor regresses from the feature F the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the human body shape S.
Further, the regressor performs regression on the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the human body shape S to obtain the three-dimensional coordinates {J_i} of each joint in the human body coordinate system.
Further, the three-dimensional human body posture estimation module is the HMR neural network model for three-dimensional human body posture estimation. Using the parameters obtained by the HMR model for fall detection improves the recognition accuracy of fall detection.
The invention has the advantages and beneficial effects that:
the tumble detection method and the tumble detection system based on the monocular three-dimensional human body posture can perform tumble detection tasks under the condition of not changing the existing monitoring hardware equipment, have few requirements on labeled data, low requirements on equipment performance and high detection accuracy, can be widely applied to the field of intelligent monitoring, are convenient to monitor and deploy, and reduce the monitoring cost.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a flow chart of the operation of the decision module in the present invention.
Fig. 3 is a schematic view of a first fall state in the present invention.
Fig. 4 is a schematic view of a first non-tumbling state in accordance with the present invention.
Fig. 5 is a schematic view of a second fall state in the present invention.
Fig. 6 is a schematic view of a second non-tumbling state of the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
As shown in fig. 1, a fall detection method based on a monocular three-dimensional human body posture includes the following steps:
Step one: using the target detection module (the YOLOv5 detection algorithm), obtain the human body rectangular frame rect_p and the rectangular frames rect_obj of sofas, beds and other articles suitable for lying in the scene image.
Step two: the human body image I_p is input into the three-dimensional human body posture estimation module HMR to obtain the parameters of the digital human body model. The feature encoder of the estimation module adopts ResNet-50, and the iterative feature regressor adopts 5 fully connected layers, iterating 3 times with the same weights in each iteration. The estimated human body parametric model SMPL specifically includes the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the three-dimensional coordinates {J_i} of each joint in the human body coordinate system. The sub-steps are as follows (an illustrative sketch of the encoder/regressor structure follows the sub-steps):
(2.1) using the human body rectangular frame rect_p obtained in step one, the human body image I_p is cropped from the scene image;
(2.2) the human body image I_p is input into the neural network model HMR for three-dimensional human body posture estimation; the feature encoder extracts the feature F of the human body image I_p, and the iterative feature regressor regresses from F the parameters of the digital human body parametric model SMPL, including: the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the human body shape S;
(2.3) the regressor of the digital human body parametric model SMPL regresses these parameters to obtain the three-dimensional coordinates {J_i} of each joint in the human body coordinate system, including the hip joint coordinate J_hip and the ankle joint coordinate J_ankle.
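The encoder/regressor structure described in this step (ResNet-50 features, a five-layer fully connected head applied three times with shared weights) might be sketched as follows; the hidden sizes, the 85-dimensional SMPL parameter vector (72 pose + 10 shape + 3 camera) and the zero initialization are assumptions borrowed from the public HMR formulation, not details fixed by the patent.

```python
import torch
import torch.nn as nn
import torchvision.models as tvm

class IterativeHMRRegressor(nn.Module):
    """HMR-style sketch: ResNet-50 feature F plus an iteratively applied
    5-layer fully connected regressor that refines the SMPL parameters
    (R_c, {R_i}, S) over three passes with shared weights."""

    NPARAM = 85  # 72 pose (global R_c + per-joint {R_i}) + 10 shape S + 3 camera

    def __init__(self):
        super().__init__()
        backbone = tvm.resnet50()
        self.encoder = nn.Sequential(*list(backbone.children())[:-1])  # feature F, 2048-d
        self.head = nn.Sequential(                                     # 5 fully connected layers
            nn.Linear(2048 + self.NPARAM, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 1024), nn.ReLU(),
            nn.Linear(1024, 512), nn.ReLU(),
            nn.Linear(512, self.NPARAM),
        )

    def forward(self, image: torch.Tensor, n_iter: int = 3) -> torch.Tensor:
        F = self.encoder(image).flatten(1)                    # (B, 2048)
        params = torch.zeros(image.size(0), self.NPARAM,
                             device=image.device)             # simple initial estimate
        for _ in range(n_iter):                               # same weights every iteration
            params = params + self.head(torch.cat([F, params], dim=1))
        return params  # fed to the SMPL regressor to obtain the joints {J_i}
```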
Step three: the human body model parameters are input into the decision module to judge whether the person has fallen; the specific sub-steps are as follows:
(3.1) Initialization stage: the person walks upright, and the posture R_c0 of the human body in the camera coordinate system in the upright state is obtained; the torso vector at that time, i.e. the Y component R_c0[:,1] of R_c0, is taken to represent the ground normal vector N_g.
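A small helper for this initialization might look as follows; it simply normalizes the second (Y) column of the upright-state rotation R_c0 and treats it as the ground normal N_g, which is what the paragraph above describes (the function name is an assumption for the example).

```python
import numpy as np

def ground_normal_from_upright(R_c0: np.ndarray) -> np.ndarray:
    """N_g: normalized torso (Y) column of the camera-frame body rotation
    captured while the person stands upright."""
    n = R_c0[:, 1]
    return n / np.linalg.norm(n)
```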
(3.2) The flow of the fall detection stage is shown in fig. 2 and is realized by the following sub-steps (a worked example with this embodiment's thresholds follows the sub-steps):
(3.2.1) judging whether the ratio of the overlapping area of the human body rectangular frame and the article rectangular frame to the area of the human body rectangular frame, α = area(rect_p ∩ rect_obj) / area(rect_p), is greater than the overlap threshold t_α = 0.8 (0 < t_α < 1); if yes, a non-fall state is judged (the person is sitting or lying on the corresponding article); if not, the next step is entered;
(3.2.2) judging whether the angle θ = arccos(R_c[:,1] · N_g) between the torso vector R_c[:,1] and the ground normal vector N_g is greater than the angle threshold t_θ = 45° (0° < t_θ < 90°); if yes, it is further judged whether the distance of the projection of the hip joint coordinate J_hip and the ankle joint coordinate J_ankle onto the ground normal vector N_g, d_ng = |(J_hip − J_ankle) · N_g|, is less than the projection threshold t_dng = 0.2 m (if yes, a fall state is judged, as shown in fig. 3; if not, a non-fall state is judged, as shown in fig. 4); if not, the next step is entered;
(3.2.3) judging whether the distance between the hip joint and the ankle joint projected onto the ground, d_g = ‖(J_hip − J_ankle) − ((J_hip − J_ankle) · N_g) N_g‖, is greater than t_g = 0.4 m; if yes, a fall state is judged, as shown in fig. 5; if not, a non-fall state is judged, as shown in fig. 6.
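Putting the pieces together, a worked run with this embodiment's thresholds (t_α = 0.8, t_θ = 45°, t_dng = 0.2 m, t_g = 0.4 m) might look like the fragment below; it reuses the detect_fall, ground_normal_from_upright and FallSmoother sketches given earlier, and the boxes and joint coordinates are dummy values chosen only to illustrate the call pattern.

```python
import numpy as np

R_c0 = np.eye(3)                                  # upright pose captured at initialization
N_g = ground_normal_from_upright(R_c0)            # ground normal from the torso column

rect_p = (100.0, 50.0, 220.0, 400.0)              # person box (x1, y1, x2, y2)
rect_obj = (400.0, 300.0, 640.0, 480.0)           # sofa/bed box, no overlap here
R_c = np.eye(3)                                   # current body rotation (still upright)
J_hip = np.array([0.0, 0.90, 3.0])                # hip well above the ankle
J_ankle = np.array([0.1, 0.05, 3.0])

smoother = FallSmoother(T=10)
for _ in range(10):                               # ten consecutive frames
    is_fall = detect_fall(rect_p, rect_obj, R_c, J_hip, J_ankle, N_g,
                          t_alpha=0.8, t_theta_deg=45.0, t_dng=0.2, t_g=0.4)
    alarm = smoother.update(is_fall)

print(alarm)   # False: the person is upright, so no fall is reported
```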
The above examples are only intended to illustrate the technical solution of the present invention, but not to limit it; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (9)

1. A fall detection method based on the monocular three-dimensional human body posture, characterized by comprising the following steps:
S1, acquiring the human body rectangular frame rect_p and the article rectangular frame rect_obj in the image;
S2, acquiring human body parameters according to the human body rectangular frame rect_p, including: the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the three-dimensional coordinates {J_i} of each joint in the human body coordinate system, wherein {J_i} comprises the hip joint coordinate J_hip and the ankle joint coordinate J_ankle;
S3, judging whether the person has fallen according to the human body parameters, comprising:
S31, judging whether the ratio of the overlapping area of the human body rectangular frame and the article rectangular frame to the area of the human body rectangular frame, α = area(rect_p ∩ rect_obj) / area(rect_p), is greater than the overlap threshold t_α (0 < t_α < 1); if yes, a non-fall state is judged; if not, the next step is entered;
S32, judging whether the angle θ = arccos(R_c[:,1] · N_g) between the torso vector R_c[:,1] and the ground normal vector N_g is greater than the angle threshold t_θ (0° < t_θ < 90°); if not, the next step is entered; if yes, it is further judged whether the distance of the projection of the hip joint coordinate J_hip and the ankle joint coordinate J_ankle onto the ground normal vector N_g, d_ng = |(J_hip − J_ankle) · N_g|, is less than the first projection threshold t_dng; if yes, a fall state is judged; if not, a non-fall state is judged;
S33, judging whether the distance between the hip joint coordinate J_hip and the ankle joint coordinate J_ankle projected onto the ground, d_g = ‖(J_hip − J_ankle) − ((J_hip − J_ankle) · N_g) N_g‖, is greater than the second projection threshold t_g; if yes, a fall state is judged; if not, a non-fall state is judged.
2. The method of claim 1, wherein the step S2 comprises the steps of:
S21, cropping the human body image I_p according to the human body rectangular frame rect_p;
S22, extracting the feature F of the human body image I_p through feature encoding, and regressing from the feature F, through iterative feature regression, the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the human body shape S;
S23, regressing R_c, {R_i} and S again to obtain the three-dimensional coordinates {J_i} of each joint in the human body coordinate system, wherein {J_i} comprises the hip joint coordinate J_hip and the ankle joint coordinate J_ankle.
3. The method of claim 1, wherein the prediction result is smoothed, and a fall is determined only when the human body is predicted to have fallen in T consecutive frames of images.
4. A fall detection system based on the monocular three-dimensional human body posture, comprising a target detection module, a three-dimensional human body posture estimation module and a decision module which are connected in sequence, characterized in that:
the target detection module is used for acquiring the human body rectangular frame rect_p and the article rectangular frame rect_obj in the image;
the three-dimensional human body posture estimation module crops the human body image I_p according to the human body rectangular frame rect_p, obtains from the human body image I_p the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the human body shape S, and obtains from R_c, {R_i} and S the three-dimensional coordinates {J_i} of each joint in the human body coordinate system, including the hip joint coordinate J_hip and the ankle joint coordinate J_ankle;
the decision module is used for judging whether the person has fallen; the decision process is as follows:
(1) judging whether the ratio of the overlapping area of the human body rectangular frame and the article rectangular frame to the area of the human body rectangular frame, α = area(rect_p ∩ rect_obj) / area(rect_p), is greater than the overlap threshold t_α (0 < t_α < 1); if yes, a non-fall state is judged; if not, the next step is entered;
(2) judging whether the angle θ = arccos(R_c[:,1] · N_g) between the torso vector R_c[:,1] and the ground normal vector N_g is greater than the angle threshold t_θ (0° < t_θ < 90°); if not, the next step is entered; if yes, it is further judged whether the distance of the projection of the hip joint coordinate J_hip and the ankle joint coordinate J_ankle onto the ground normal vector N_g, d_ng = |(J_hip − J_ankle) · N_g|, is less than the first projection threshold t_dng; if yes, a fall state is judged; if not, a non-fall state is judged;
(3) judging whether the distance between the hip joint coordinate J_hip and the ankle joint coordinate J_ankle projected onto the ground, d_g = ‖(J_hip − J_ankle) − ((J_hip − J_ankle) · N_g) N_g‖, is greater than the second projection threshold t_g; if yes, a fall state is judged; if not, a non-fall state is judged.
5. The monocular three-dimensional human pose based fall detection system of claim 4, wherein the three-dimensional human pose estimation module comprises a feature encoder, an iterative feature regressor and a regressor.
6. The monocular three-dimensional human body posture based fall detection system of claim 5, wherein the feature encoder extracts the feature F of the human body image I_p.
7. The monocular three-dimensional human body posture based fall detection system of claim 6, wherein the iterative feature regressor regresses from the feature F the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the human body shape S.
8. The monocular three-dimensional human body posture based fall detection system of claim 5, wherein the regressor performs regression on the posture R_c of the human body in the camera coordinate system, the posture {R_i} of each joint relative to its parent joint, and the human body shape S to obtain the three-dimensional coordinates {J_i} of each joint in the human body coordinate system.
9. The monocular three-dimensional human body posture based fall detection system of claim 4, wherein the three-dimensional human body posture estimation module is the HMR neural network model for three-dimensional human body posture estimation.
CN202110937135.0A 2021-08-16 2021-08-16 Tumbling detection method and system based on monocular three-dimensional human body posture Active CN113378809B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110937135.0A CN113378809B (en) 2021-08-16 2021-08-16 Tumbling detection method and system based on monocular three-dimensional human body posture

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110937135.0A CN113378809B (en) 2021-08-16 2021-08-16 Tumbling detection method and system based on monocular three-dimensional human body posture

Publications (2)

Publication Number Publication Date
CN113378809A CN113378809A (en) 2021-09-10
CN113378809B true CN113378809B (en) 2021-12-14

Family

ID=77577286

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110937135.0A Active CN113378809B (en) 2021-08-16 2021-08-16 Tumbling detection method and system based on monocular three-dimensional human body posture

Country Status (1)

Country Link
CN (1) CN113378809B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116403275B (en) * 2023-03-14 2024-05-24 南京航空航天大学 Method and system for detecting personnel advancing posture in closed space based on multi-vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582158A (en) * 2020-05-07 2020-08-25 济南浪潮高新科技投资发展有限公司 Tumbling detection method based on human body posture estimation
CN111709277A (en) * 2020-04-29 2020-09-25 平安科技(深圳)有限公司 Human body tumbling detection method and device, computer equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
ES2812578T3 (en) * 2011-05-13 2021-03-17 Vizrt Ag Estimating a posture based on silhouette

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709277A (en) * 2020-04-29 2020-09-25 平安科技(深圳)有限公司 Human body tumbling detection method and device, computer equipment and storage medium
CN111582158A (en) * 2020-05-07 2020-08-25 济南浪潮高新科技投资发展有限公司 Tumbling detection method based on human body posture estimation

Also Published As

Publication number Publication date
CN113378809A (en) 2021-09-10

Similar Documents

Publication Publication Date Title
CN109919132B (en) Pedestrian falling identification method based on skeleton detection
CN104616438B (en) A kind of motion detection method of yawning for fatigue driving detection
JP5001260B2 (en) Object tracking method and object tracking apparatus
CN110738154A (en) pedestrian falling detection method based on human body posture estimation
KR101035055B1 (en) System and method of tracking object using different kind camera
CN110147738B (en) Driver fatigue monitoring and early warning method and system
Hasan et al. Robust pose-based human fall detection using recurrent neural network
CN111582158A (en) Tumbling detection method based on human body posture estimation
US20120155707A1 (en) Image processing apparatus and method of processing image
CN112966628A (en) Visual angle self-adaptive multi-target tumble detection method based on graph convolution neural network
CN113378809B (en) Tumbling detection method and system based on monocular three-dimensional human body posture
CN109654676A (en) Adjusting method, device, system, computer equipment and the storage medium of air supply device
CN109492588A (en) A kind of rapid vehicle detection and classification method based on artificial intelligence
Uddin et al. A deep learning-based human activity recognition in darkness
Stone et al. Silhouette classification using pixel and voxel features for improved elder monitoring in dynamic environments
CN115116127A (en) Fall detection method based on computer vision and artificial intelligence
CN111144174A (en) System for identifying falling behavior of old people in video by using neural network and traditional algorithm
CN102156994B (en) Joint positioning method for single-view unmarked human motion tracking
CN111178201A (en) Human body sectional type tracking method based on OpenPose posture detection
Krotosky et al. Real-time stereo-based head detection using size, shape and disparity constraints
CN110674751A (en) Device and method for detecting head posture based on monocular camera
Wei et al. Center of mass estimation for balance evaluation using convolutional neural networks
Li et al. Distributed rgbd camera network for 3d human pose estimation and action recognition
CN114495150A (en) Human body tumbling detection method and system based on time sequence characteristics
Wang et al. A Single-Camera Computer Vision-Based Method for 3D L5/S1 Moment Estimation During Lifting Tasks

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant