CN114694252B - Old people falling risk prediction method - Google Patents
Old people falling risk prediction method
- Publication number: CN114694252B (application CN202210321855.9A)
- Authority: CN (China)
- Prior art keywords: risk, time, angle, clip, leg
- Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Classifications
- A61B5/1117 — Fall detection
- A61B5/112 — Gait analysis
- A61B5/1121 — Determining geometric values, e.g. centre of rotation or angular range of movement
- A61B5/1128 — Measuring movement of the entire body or parts thereof using image analysis
- A61B5/7275 — Determining trends in physiological measurement data; predicting development of a medical condition based on physiological measurements
- G06F18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N3/045 — Combinations of networks
- G06N3/08 — Learning methods
- Y02P90/30 — Computing systems specially adapted for manufacturing
Abstract
The invention discloses a method for predicting the fall risk of elderly people, comprising the following steps: S1, data acquisition and preprocessing; S2, skeleton modeling; S3, video time segmentation based on action features: the video is divided into 7 segments, labeled S0~S6, and the 6 time division points between the 7 segments are denoted T1~T6 in sequence; S4, temporal risk judgment; S5, extracting posture features frame by frame for each action stage from the stage videos obtained by the segmentation in step S3; S6, judging the posture risk according to the posture features obtained in step S5; S7, making the final comprehensive risk judgment from the temporal risk and the posture risk obtained in steps S4 and S6. By combining the temporal features and the posture features of the movement process, the resulting risk assessment reflects real balance and mobility, and the accuracy of the fall risk prediction is improved.
Description
Technical Field
The invention relates to a fall risk prediction method for old people.
Background
As people age, physical fitness, skeletal muscle function and other capabilities naturally decline, increasing the risk of falling, and falls easily cause injuries such as fractures. To discover fall risk early and take effective protective measures, fall risk assessment for the elderly has important practical significance. The TUG (Timed Up and Go) test is a well-established method for predicting fall risk in the elderly today; it has received broad attention and its effectiveness has been proven.
In recent years, researchers have applied computer technology to the TUG test, i.e., automated it. Automating the TUG test preserves its utility while reducing the need for direct intervention by professional medical staff, which improves the test's accessibility and effectively reduces possible human error in its administration.
Patent application No. 201911412802.2 discloses a fall risk assessment method and system for the elderly, which extracts gait features such as single-leg swing displacement, knee joint angle and ankle joint angle during walking to judge the fall risk of the elderly.
Existing TUG automation methods are limited to the timing characteristics of the TUG and ignore the rich motion and posture features of the TUG process; moreover, the features and algorithms they use for time segmentation show poor robustness in practical applications. The complete TUG action flow includes activities such as standing up, walking, turning around and sitting down, which reflect the physical strength, balance level and other characteristics of the elderly from different angles. The 201911412802.2 application focuses mainly on features extracted from the walking stage to determine fall risk, which limits the accuracy of the fall risk determination.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide an elderly fall risk prediction method that integrates the temporal features and the posture features of the movement process, so that the resulting risk assessment reflects real balance and mobility and improves the accuracy of the fall risk prediction.
The aim of the invention is achieved by the following technical scheme. A method for predicting the fall risk of an elderly person comprises the following steps:
S1, data acquisition and preprocessing, comprising the following substeps:
S11, arrange the data acquisition environment according to the TUG standard: a chair 60 cm high faces a flat 3 m walkway; the camera must have a resolution of at least 720p and is fixed at a height of 1.5 m;
S12, shooting starts with the subject sitting upright; the subject then stands up and walks forward, turns around at the end of the walkway, returns to the start, sits down again, and shooting stops; the resulting video is preprocessed to 1280x720 pixels at 30 frames per second;
S13, the subject's actual number of falls in the past two years is used as the label of the sample data, forming a training data set;
S14, fall risk is divided into 3 grades: low risk corresponds to no fall record within 2 years, medium risk to 1–2 fall records within 2 years, and high risk to 3 or more fall records;
S2, skeleton modeling: use OpenPose to model each human body in the video as a skeleton of 13 nodes, denoted J0~J12, specifically: J0 - head, J1 - left shoulder, J2 - right shoulder, J3 - left elbow, J4 - right elbow, J5 - left wrist, J6 - right wrist, J7 - left hip, J8 - right hip, J9 - left knee, J10 - right knee, J11 - left ankle, J12 - right ankle;
The horizontal and vertical coordinates on each 2D image frame are obtained through the OpenPose framework;
The absolute coordinates obtained by OpenPose are then reconstructed into a relative coordinate system: in each frame, the center of the person is found from the absolute coordinates of nodes J1, J2, J7 and J8; the person is framed by a rectangular bounding box of fixed aspect ratio; the lower-left corner of the box is taken as the coordinate origin with the axis directions unchanged, establishing a new coordinate system;
S3, video time segmentation based on action features: the video is divided into 7 segments, labeled S0~S6; the 6 time division points between the 7 segments are denoted T1~T6 in sequence, and the start and end time points of the whole video are denoted T0 and T7 respectively; S0 corresponds to sitting, S1 to standing up, S2 to forward walking, S3 to turning around in place, S4 to return walking, S5 to turning around and sitting down, and S6 to sitting;
S4, judge the temporal risk from the temporal features obtained by the segmentation in step S3, as follows. Five temporal features are defined:
Total time: F1 = |S1| + |S2| + |S3| + |S4| + |S5| = T6 - T1
Standing-up stage time: F2 = |S1| = T2 - T1
Walking stage time: F3 = |S2| + |S4| = T5 + T3 - T4 - T2
Turning stage time: F4 = |S3| = T4 - T3
Turning-and-sitting stage time: F5 = |S5| = T6 - T5
Thresholds Th1~Th5 are set for the five temporal features. Different temporal features carry different weights in the risk assessment, so a risk score weight W1~W5 is set for each temporal feature, giving the temporal-feature risk score R_time.
If the temporal risk score R_time exceeds a preset overall time threshold Th_sum, the risk grade is directly judged as high without performing the subsequent steps; otherwise, step S5 is executed;
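The text above sets per-feature thresholds Th1~Th5 and weights W1~W5 but does not reproduce the explicit formula for R_time. A minimal sketch under the assumption that each feature exceeding its threshold contributes its weight to the score; all numeric values are illustrative placeholders, not from the patent:

```python
def temporal_risk(F, Th, W):
    # F, Th, W: the five temporal features F1..F5, their thresholds
    # Th1..Th5, and their risk-score weights W1..W5.
    # Assumed scoring form: a feature exceeding its threshold
    # contributes its weight (the source leaves the formula implicit).
    return sum(w for f, th, w in zip(F, Th, W) if f > th)

# Illustrative values only (seconds); real thresholds must be fitted.
F = [14.2, 2.1, 6.0, 2.8, 3.3]    # F1..F5 measured from one video
Th = [13.5, 1.8, 6.5, 2.5, 3.5]   # hypothetical per-feature thresholds
W = [3.0, 1.0, 1.5, 1.5, 1.0]     # hypothetical per-feature weights
R_time = temporal_risk(F, Th, W)  # 3.0 + 1.0 + 1.5 = 5.5
```

R_time would then be compared against Th_sum as described above.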
S5, extract posture features frame by frame for each action stage from the stage videos obtained by the segmentation in step S3;
S6, judge the posture risk according to the posture features obtained in step S5;
S7, make the final comprehensive risk judgment from the temporal risk and the posture risk obtained in steps S4 and S6.
Further, in step S3, the specific method of video time segmentation based on action features is:
S31, construct the following action features:
The height difference of nodes J7, J8, J9 and J10 is defined as feature F_clip1, where Y(J7), Y(J8), Y(J9) and Y(J10) denote the ordinates of J7, J8, J9 and J10;
The horizontal distance between nodes J1 and J2 is defined as:
F_clip2 = |X(J1) - X(J2)|
where X(J1) and X(J2) denote the abscissas of J1 and J2;
The horizontal distance between nodes J7 and J8 is defined as:
F_clip3 = |X(J7) - X(J8)|
where X(J7) and X(J8) denote the abscissas of J7 and J8;
S32, compute the three action features of S31 for every frame, obtaining three discrete sequences F_clip1, F_clip2, F_clip3; use the binary change-point search algorithm BinSeg with an L2 cost model and empirical parameters to find the inflection points of each sequence, and derive the stage division of the actions from these inflection points; specifically:
Using the F_clip1 sequence, apply the inflection-point search algorithm with prior knowledge to determine the boundary point T1 between S0 and S1; specifically, T1 appears at the 1st inflection point of F_clip1;
Using F_clip1, determine the boundary point T2 between S1 and S2; T2 appears at the 2nd inflection point of F_clip1;
Using F_clip2, determine the boundary point T3 between S2 and S3; T3 appears at the 1st inflection point of F_clip2;
Using F_clip3, determine the boundary point T4 between S3 and S4; T4 appears at the 3rd inflection point of F_clip3;
Using F_clip2, determine the boundary point T5 between S4 and S5; T5 appears at the 4th inflection point of F_clip2;
Using F_clip1, determine the boundary point T6 between S5 and S6; T6 appears at the 4th inflection point of F_clip1.
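The binary change-point search of step S32 can be sketched directly. Production code would typically use the `ruptures` library's `Binseg` with an L2 cost; this self-contained version, with an assumed `min_size` parameter, only illustrates the mechanics:

```python
import numpy as np

def l2_cost(x):
    # L2 cost model: sum of squared deviations from the segment mean.
    return float(np.sum((x - x.mean()) ** 2)) if len(x) else 0.0

def best_split(x, min_size=5):
    # Single split index minimizing the total L2 cost, or None.
    best_i, best_c = None, l2_cost(x)
    for i in range(min_size, len(x) - min_size):
        c = l2_cost(x[:i]) + l2_cost(x[i:])
        if c < best_c:
            best_i, best_c = i, c
    return best_i

def binseg(x, n_bkps, min_size=5):
    # Greedy binary segmentation: repeatedly split the segment whose
    # split reduces the total L2 cost the most.
    segments, bkps = [(0, len(x))], []
    for _ in range(n_bkps):
        best = None
        for s, e in segments:
            i = best_split(x[s:e], min_size)
            if i is None:
                continue
            gain = l2_cost(x[s:e]) - l2_cost(x[s:s + i]) - l2_cost(x[s + i:e])
            if best is None or gain > best[0]:
                best = (gain, s, e, s + i)
        if best is None:
            break
        _, s, e, b = best
        segments.remove((s, e))
        segments += [(s, b), (b, e)]
        bkps.append(b)
    return sorted(bkps)

signal = np.concatenate([np.zeros(40), 5.0 * np.ones(40), np.zeros(40)])
print(binseg(signal, n_bkps=2))  # → [40, 80]
```

On the F_clip sequences, the returned change points play the role of the inflection points from which T1~T6 are read off in order.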
Further, step S5 is implemented as follows:
S51, standing-up central-axis forward/backward offset feature F6: the maximum angle by which the trunk central axis deviates, during standing up, from its orientation while sitting still; the trunk central axis is obtained from the coordinates of the left shoulder, right shoulder, left hip and right hip. The specific calculation is:
First compute the slope of the trunk central axis, then convert the slope into an angle in radians, denoted by the symbol θ;
For each time t_i, compute the slope as above and convert it into the radian angle θ(t_i); F6 is then obtained as the maximum of |θ(t_i) - θ_sit|,
where T1 < t_i < T2 covers all moments of the standing-up process, and θ_sit denotes the mean offset angle of the sitting stage;
S52, compute the mean central-axis offset angle F7 during the turning process;
S53, compute the maximum angle F8 by which the central axis deviates from upright during the sitting-down process,
where T5 < t_i < T6, and θ_stand is the mean offset angle of the upright standing stage;
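A sketch of the trunk-axis features F6~F8, under the assumption (implied but not spelled out above) that the axis runs from the hip midpoint to the shoulder midpoint and that deviation is measured against a reference-stage mean angle:

```python
import numpy as np

def trunk_axis_angle(j1, j2, j7, j8):
    # Angle (radians) of the line from the hip midpoint to the
    # shoulder midpoint, measured from the vertical so that an
    # upright trunk gives 0. The midpoint construction is an
    # assumption about how the four joints define the axis.
    shoulder_mid = (np.asarray(j1, float) + np.asarray(j2, float)) / 2
    hip_mid = (np.asarray(j7, float) + np.asarray(j8, float)) / 2
    dx, dy = shoulder_mid - hip_mid
    return float(np.arctan2(dx, dy))  # 0 when the axis is vertical

def max_axis_offset(angles, ref_angle):
    # F6/F8 pattern: maximum deviation from a reference-stage mean.
    return float(np.max(np.abs(np.asarray(angles) - ref_angle)))

def mean_axis_offset(angles, ref_angle):
    # F7 pattern: mean deviation over a stage (e.g. turning).
    return float(np.mean(np.abs(np.asarray(angles) - ref_angle)))
```

For F6, `angles` would hold θ(t_i) for T1 < t_i < T2 and `ref_angle` would be θ_sit; F8 uses the sitting-down interval and θ_stand.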
S54, compute the walking step-length-to-height ratio feature F9: the horizontal distance between two successive landing positions of the same foot's ankle is called a single step length; the single step lengths of both feet are averaged over the walking process, and the ratio of the average step length to the body height is computed.
The landing time points are obtained as the extrema of the ankle height sequences:
Left-foot landing time point set:
A_floor_left = { t_m | T2 < t_m < T3 or T4 < t_m < T5, and the left-ankle height is a local minimum at t_m }
where t_m is the time of the m-th left-ankle height minimum in the absolute coordinate system obtained by OpenPose, and X'(t_m) denotes the left-ankle abscissa at t_m; the landing points of the incomplete first and last steps are removed, and the average single step length is counted;
Right-foot landing time point set:
A_floor_right = { t_n | T2 < t_n < T3 or T4 < t_n < T5, and the right-ankle height is a local minimum at t_n }
where t_n is the time of the n-th right-ankle height minimum in the absolute coordinate system obtained by OpenPose, and X'(t_n) denotes the right-ankle abscissa at t_n; the landing points of the incomplete first and last steps are removed, and the average single step length is counted;
The left-foot single step length is |X'(t_{m+1}) - X'(t_m)|, the difference of the (m+1)-th and m-th left-ankle abscissas in the absolute coordinate system; the right-foot single step length is defined likewise;
The walking step-length-to-height ratio F9 is computed as the average single step length divided by the body height, where BH denotes the body height.
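The landing detection and F9 ratio above can be sketched as follows; landings are taken as local minima of the ankle height sequence, with the incomplete head and tail steps trimmed as described. Function and variable names are illustrative:

```python
import numpy as np

def landing_times(ankle_y):
    # Indices of strict local minima of the ankle height sequence,
    # i.e. the landing time points of that foot.
    y = np.asarray(ankle_y)
    return [i for i in range(1, len(y) - 1)
            if y[i] < y[i - 1] and y[i] < y[i + 1]]

def mean_step_length(ankle_x, ankle_y):
    # Horizontal distance between successive landings of one foot,
    # dropping the incomplete first and last steps, then averaged.
    t = landing_times(ankle_y)
    steps = [abs(ankle_x[t[k + 1]] - ankle_x[t[k]])
             for k in range(len(t) - 1)]
    steps = steps[1:-1]  # remove head/tail incomplete steps
    return float(np.mean(steps)) if steps else 0.0

def step_height_ratio(left_x, left_y, right_x, right_y, body_height):
    # F9: mean of the two feet's average single step lengths, over BH.
    avg = (mean_step_length(left_x, left_y)
           + mean_step_length(right_x, right_y)) / 2
    return avg / body_height
```

In the full pipeline, the input sequences would first be restricted to the walking intervals (T2, T3) and (T4, T5).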
S55, compute the walking central-axis left/right offset feature F10: the mean horizontal left/right angle by which the central axis deviates from upright during the walking process;
S56, walking swing-arm amplitude feature F11: the angle difference between the elbow-shoulder vectors at the farthest backswing and the farthest foreswing, formed by the left-shoulder/left-elbow and right-shoulder/right-elbow nodes;
First compute the swing-arm slopes of the left and right arms, where X'(J_c) and Y'(J_c) (c = 1, ..., 4) are the coordinates of J_c in the absolute coordinate system obtained by OpenPose;
Convert the slopes into the radian angles α_left_arm and α_right_arm;
Compute the swing-arm angles of the left and right arms in every frame, forming left-arm and right-arm angle sequences; take the extrema of the sequences to obtain the maximum foreswing angle of the left arm A_af_left, the maximum backswing angle of the left arm A_ab_left, the maximum foreswing angle of the right arm A_af_right, and the maximum backswing angle of the right arm A_ab_right;
Align adjacent A_af_left and A_ab_left values and take their differences to obtain the left-arm single-swing angle set A_as_left;
Align adjacent A_af_right and A_ab_right values and take their differences to obtain the right-arm single-swing angle set A_as_right;
Remove the first and last datum of both A_as_left and A_as_right, then compute F11 from the remaining sets A'_as_left and A'_as_right as the mean of all single-swing angles, where α_m ∈ A'_as_left, α_n ∈ A'_as_right, and |A'_as_left|, |A'_as_right| denote the numbers of elements in A'_as_left and A'_as_right;
S57, compute the walking swing-leg amplitude feature F12: the angle difference of the hip-knee vectors formed by the left-hip/left-knee and right-hip/right-knee nodes;
Compute the swing slopes of the left and right legs, where X'(J_c) and Y'(J_c) (c = 7, ..., 10) are the coordinates of J_c in the absolute coordinate system obtained by OpenPose;
Convert K_left_leg and K_right_leg from slopes into the radian angles β_left_leg and β_right_leg;
Compute the swing angles of the left and right legs in every frame, forming left-leg and right-leg angle sequences; take the extrema of the sequences to obtain the maximum foreswing angle of the left leg A_lf_left, the maximum backswing angle of the left leg A_lb_left, the maximum foreswing angle of the right leg A_lf_right, and the maximum backswing angle of the right leg A_lb_right;
Align adjacent A_lf_left and A_lb_left values and take their differences to obtain the left-leg swing-angle sequence A_ls_left;
Align adjacent A_lf_right and A_lb_right values and take their differences to obtain the right-leg swing-angle sequence A_ls_right;
Remove the first and last datum of A_ls_left and A_ls_right, then compute F12 from the remaining sequences A'_ls_left and A'_ls_right as the mean of all swing angles, where β_m ∈ A'_ls_left, β_n ∈ A'_ls_right, and |A'_ls_left|, |A'_ls_right| denote the numbers of elements in A'_ls_left and A'_ls_right.
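The arm (S56) and leg (S57) amplitude features follow the same pattern: convert joint-pair angles per frame into a sequence, take the sequence's extrema, pair adjacent foreswing/backswing extrema, trim the ends, and average. A sketch of that shared pattern (pairing extrema by order of occurrence is an assumption):

```python
import numpy as np

def local_extrema(seq):
    # Indices of strict local maxima and minima of a 1-D sequence.
    maxima = [i for i in range(1, len(seq) - 1)
              if seq[i] > seq[i - 1] and seq[i] > seq[i + 1]]
    minima = [i for i in range(1, len(seq) - 1)
              if seq[i] < seq[i - 1] and seq[i] < seq[i + 1]]
    return maxima, minima

def swing_amplitude(angles):
    # Pair each foreswing peak with the adjacent backswing trough,
    # difference them, drop the first and last (incomplete) swing,
    # and average — the F11/F12 pattern for one limb.
    maxima, minima = local_extrema(angles)
    swings = [angles[i] - angles[j] for i, j in zip(maxima, minima)]
    swings = swings[1:-1]  # remove head/tail incomplete swings
    return float(np.mean(swings)) if swings else 0.0
```

F11 (or F12) would then average `swing_amplitude` results over the left and right limb angle sequences.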
Further, step S6 is implemented as follows: the posture features F6~F12 are input into a multi-layer perceptron, which performs regression to predict the posture risk value R_posture.
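The text specifies only that F6~F12 are fed to a multi-layer perceptron for regression. A minimal from-scratch sketch — the architecture, activation, learning rate and training loop are all illustrative assumptions, and in practice a library regressor (e.g. scikit-learn's MLPRegressor) would serve:

```python
import numpy as np

rng = np.random.default_rng(0)

class TinyMLP:
    # One-hidden-layer perceptron regressing a risk score from the
    # 7 posture features F6..F12; trained by full-batch gradient
    # descent on mean squared error.
    def __init__(self, n_in=7, n_hidden=16, lr=0.05):
        self.W1 = rng.normal(0, 0.5, (n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)
        return self.h @ self.W2 + self.b2

    def fit(self, X, y, epochs=2000):
        y = y.reshape(-1, 1)
        for _ in range(epochs):
            err = self.forward(X) - y            # dL/dpred for 0.5*MSE
            gW2 = self.h.T @ err / len(X)
            gb2 = err.mean(0)
            dh = (err @ self.W2.T) * (1 - self.h ** 2)  # tanh'
            gW1 = X.T @ dh / len(X)
            gb1 = dh.mean(0)
            self.W2 -= self.lr * gW2; self.b2 -= self.lr * gb2
            self.W1 -= self.lr * gW1; self.b1 -= self.lr * gb1

    def predict(self, X):
        return self.forward(X).ravel()
```

Training labels would come from the fall-record grades of step S14; here the model is only a sketch of the regression stage.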
Further, step S7 is implemented as follows: if the temporal risk R_time obtained in step S4 is already greater than the threshold Th_sum, the comprehensive risk judgment directly concludes that the fall risk is high; otherwise, the following judgment is made:
The weights W_time and W_posture are set by linear fitting, and the final risk is R_final = W_time · R_time + W_posture · R_posture;
The final risk R_final is compared with a high-risk reference threshold Th_danger and a medium-risk reference threshold Th_alarm: at or above Th_danger, the subject is considered at high risk of falling; at or above Th_alarm but below Th_danger, at medium risk of falling; below Th_alarm, at low risk of falling.
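The combination logic of step S7 can be sketched directly; every threshold and weight value below is an illustrative placeholder, since the patent obtains them by fitting:

```python
def fall_risk_level(r_time, r_posture,
                    th_sum=3.0, w_time=0.5, w_posture=0.5,
                    th_danger=2.0, th_alarm=1.0):
    # Early exit on high temporal risk, otherwise a weighted
    # combination compared against two reference thresholds.
    # All default values are hypothetical, not from the source.
    if r_time > th_sum:
        return "high"
    r_final = w_time * r_time + w_posture * r_posture
    if r_final >= th_danger:
        return "high"
    if r_final >= th_alarm:
        return "medium"
    return "low"
```

With the placeholder defaults, a subject with r_time = 1.0 and r_posture = 1.0 gets r_final = 1.0 and is graded medium risk.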
The beneficial effects of the invention are as follows: the method divides the action video into different stages, predicts the time-dimension fall risk from the temporal features of each stage, and makes a preliminary fall risk judgment; it then constructs posture features for the different stages based on the video segmentation result and feeds them into a multi-layer perceptron for regression, obtaining a space-dimension posture risk prediction; finally, the risks of the two dimensions, time and space, are combined into the final fall risk prediction. By combining the temporal features and the posture features of the movement process, the resulting risk assessment reflects real balance and mobility, and the accuracy of the fall risk prediction is improved.
Drawings
FIG. 1 is a flow chart of the elderly fall risk prediction method of the present invention;
FIG. 2 is a schematic diagram of the data acquisition environment of the present invention;
FIG. 3 is a schematic diagram of the skeleton modeling of the present invention;
FIG. 4 is a schematic diagram of the natural division of the action stages in the present invention;
FIG. 5 is a schematic diagram of deriving the stage division of the actions from the inflection points in the present invention.
Detailed Description
The technical scheme of the invention is further described below with reference to the accompanying drawings.
As shown in fig. 1, the method for predicting the fall risk of the elderly according to the present invention comprises the following steps:
S1, data acquisition and preprocessing, comprising the following substeps:
S11, arrange the data acquisition environment according to the TUG standard: a chair 60 cm high faces a flat 3 m walkway; the camera must have a resolution of at least 720p and is fixed at a height of 1.5 m, with the line from the camera to the end of the walkway perpendicular to the walkway, as shown in fig. 2;
S12, shooting starts with the subject sitting upright; the subject then stands up and walks forward, turns around at the end of the walkway, returns to the start, sits down again, and shooting stops; the resulting video is preprocessed to 1280x720 pixels at 30 frames per second;
S13, the subject's actual number of falls in the past two years is used as the label of the sample data, forming a training data set for parameter fitting;
S14, fall risk is divided into 3 grades: low risk corresponds to no fall record within 2 years, medium risk to 1–2 fall records within 2 years, and high risk to 3 or more fall records;
S2, skeleton modeling: use OpenPose to model each human body in the video as a skeleton of 13 nodes, denoted J0~J12, as shown in fig. 3, specifically: J0 - head, J1 - left shoulder, J2 - right shoulder, J3 - left elbow, J4 - right elbow, J5 - left wrist, J6 - right wrist, J7 - left hip, J8 - right hip, J9 - left knee, J10 - right knee, J11 - left ankle, J12 - right ankle;
The horizontal and vertical coordinates on each 2D image frame are obtained through the OpenPose framework; in the formulas below, X denotes a horizontal coordinate and Y a vertical coordinate.
During human motion, the size of the person's projection in the image changes with movement. To reduce the influence of this size change on the feature values, the absolute coordinates obtained by OpenPose are reconstructed into a relative coordinate system as follows: in each frame, the center of the person is found from the absolute coordinates of nodes J1, J2, J7 and J8; the person is framed by a rectangular bounding box of fixed aspect ratio; the lower-left corner of the box is taken as the coordinate origin with the axis directions unchanged, establishing a new coordinate system. The unit length of the new coordinate system is scaled according to the size of the bounding box, i.e., the image inside the box is scaled to a uniform size. The box size is chosen by considering height (the absolute Y-coordinate difference) and width (the absolute X-coordinate difference) together: when the height-to-width ratio is greater than a ratio r, the fixed-aspect-ratio box is set according to the height; when the ratio is smaller than r, it is set according to the width, which handles postures where the body is curled up.
The final image has the following properties: when the body is upright, the human body nearly fills the vertical axis of the framed image; when the body is curled up, the body width nearly fills the horizontal axis; and the distance between the person and the camera does not affect the proportion of the frame the body occupies. Unless otherwise stated, the reconstructed relative coordinate system is used in what follows.
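The relative-coordinate reconstruction described above can be sketched as follows; the aspect ratio r and the output scale are left unspecified by the text, so the values here are assumptions:

```python
import numpy as np

def to_relative(joints, r=1.5, out_size=1.0):
    # joints: (N, 2) array of absolute (x, y) skeleton coordinates
    # for one frame. A fixed-aspect-ratio box is anchored at the
    # joints' bounding box, the lower-left corner becomes the origin,
    # and coordinates are rescaled to a uniform size. The ratio r and
    # out_size are illustrative placeholders.
    xs, ys = joints[:, 0], joints[:, 1]
    w, h = xs.max() - xs.min(), ys.max() - ys.min()
    if h / w > r:          # upright posture: box follows the height
        box_h, box_w = h, h / r
    else:                  # curled posture: box follows the width
        box_w, box_h = w, w * r
    origin = np.array([xs.min(), ys.min()])
    scale = out_size / np.array([box_w, box_h])
    return (joints - origin) * scale
```

After this transform, the person occupies a comparable share of the frame regardless of distance from the camera, as required.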
S3, video temporal segmentation based on action features: the video is divided into 7 segments, labeled S_0 to S_6. The 6 time division points between the 7 segments are denoted T_1 to T_6 in order, and the start and end points of the whole video are denoted T_0 and T_7 respectively. S_0 corresponds to sitting, S_1 to standing up, S_2 to forward walking, S_3 to turning around in place, S_4 to return walking, S_5 to turning around and sitting down, and S_6 to sitting again, as shown in FIG. 4.
The specific method for video temporal segmentation based on action features is as follows:
S31, the following action features are constructed:
the height difference between nodes J_7, J_8 and nodes J_9, J_10 is defined as F_clip_1, where Y(J_7), Y(J_8), Y(J_9), Y(J_10) denote the ordinates of J_7, J_8, J_9 and J_10 respectively;
the horizontal distance between nodes J_1 and J_2 is defined as:
F_clip_2 = |X(J_1) - X(J_2)|
where X(J_1), X(J_2) denote the abscissas of J_1 and J_2 respectively;
the horizontal distance between nodes J_7 and J_8 is defined as:
F_clip_3 = |X(J_7) - X(J_8)|
where X(J_7), X(J_8) denote the abscissas of J_7 and J_8 respectively;
S32, the three action features of S31 are computed for each frame, yielding three discrete sequences F_clip_1, F_clip_2, F_clip_3. The binary change-point search algorithm BinSeg with an L2 cost model, tuned with empirical parameters, is used to locate the change points of each sequence, and the stage division of the actions is obtained from these change points, as shown in FIG. 5. Specifically:
using the F_clip_1 sequence, the change-point search algorithm combined with prior knowledge determines the boundary point T_1 between S_0 and S_1; specifically, T_1 appears at the 1st change point of F_clip_1;
using F_clip_1, the boundary point T_2 between S_1 and S_2 is determined; T_2 appears at the 2nd change point of F_clip_1;
using F_clip_2, the boundary point T_3 between S_2 and S_3 is determined; T_3 appears at the 1st change point of F_clip_2;
using F_clip_3, the boundary point T_4 between S_3 and S_4 is determined; T_4 appears at the 3rd change point of F_clip_3;
using F_clip_2, the boundary point T_5 between S_4 and S_5 is determined; T_5 appears at the 4th change point of F_clip_2;
using F_clip_1, the boundary point T_6 between S_5 and S_6 is determined; T_6 appears at the 4th change point of F_clip_1.
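The stage division of S32 can be reproduced with any L2-model binary segmentation implementation (the `ruptures` library provides one as `Binseg(model="l2")`). The hand-rolled sketch below illustrates the idea on a synthetic F_clip_1-like sequence; the signal values, `min_size` and the number of change points are illustrative assumptions only.

```python
import numpy as np

def l2_cost(seg):
    # Cost of approximating a segment by its mean (the L2 model).
    return float(((seg - seg.mean()) ** 2).sum()) if len(seg) else 0.0

def best_split(x, lo, hi, min_size):
    """Best single change point of x[lo:hi], with the cost gain it yields."""
    best, best_gain = None, 0.0
    whole = l2_cost(x[lo:hi])
    for t in range(lo + min_size, hi - min_size + 1):
        gain = whole - l2_cost(x[lo:t]) - l2_cost(x[t:hi])
        if gain > best_gain:
            best, best_gain = t, gain
    return best, best_gain

def binseg(x, n_bkps, min_size=5):
    """Binary segmentation: greedily add, n_bkps times, the change point
    that most reduces the total L2 cost over the current segments."""
    bkps = []
    for _ in range(n_bkps):
        bounds = sorted([0] + bkps + [len(x)])
        t, gain = max(
            (best_split(x, a, b, min_size) for a, b in zip(bounds, bounds[1:])),
            key=lambda c: c[1])
        if t is None:
            break
        bkps.append(t)
    return sorted(bkps)

# Synthetic F_clip_1-like sequence: low while seated, high while upright.
signal = np.concatenate([np.zeros(50), np.ones(40), np.zeros(50)])
print(binseg(signal, 2))  # [50, 90]
```

Each detected index marks a boundary point T_k between two action stages; matching the k-th change point to the stage boundary is the prior knowledge referred to in S32.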
S4, the time risk is judged from the temporal features obtained by the segmentation in step S3. The specific method is as follows: 5 temporal features are defined:
Total time: F_1 = |S_1| + |S_2| + |S_3| + |S_4| + |S_5| = T_6 - T_1
Rising-phase time: F_2 = |S_1| = T_2 - T_1
Walking-phase time: F_3 = |S_2| + |S_4| = T_5 + T_3 - T_4 - T_2
Turning-phase time: F_4 = |S_3| = T_4 - T_3
Turn-and-sit phase time: F_5 = |S_5| = T_6 - T_5
Thresholds Th_1, Th_2, Th_3, Th_4, Th_5 are set for the five temporal features. Since different temporal features carry different weights in the risk assessment, a risk score weight W_1, W_2, W_3, W_4, W_5 is set for each temporal feature, giving the risk score R_time of the temporal features:
The thresholds and weights in the above formula are obtained by fitting actual risk data. If the temporal-feature risk score R_time exceeds a preset integrated time threshold Th_sum, the risk rating is directly judged to be high and the subsequent steps are not performed; otherwise, step S5 is executed. By analysing actual risk data, the threshold Th_sum with the greatest discrimination on the dataset is obtained.
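The expression for R_time appears as an image in the original; a plausible reading consistent with "per-feature thresholds plus per-feature weights" is a weighted threshold-exceedance sum. The sketch below implements that assumed form; both the formula shape and all numeric values are assumptions, not disclosed values.

```python
def time_risk(features, thresholds, weights):
    """Assumed form of R_time: each temporal feature F_i exceeding its
    threshold Th_i contributes its weight W_i to the score."""
    return sum(w for f, th, w in zip(features, thresholds, weights) if f > th)

# Illustrative numbers only (seconds and weights are not from the patent).
F = [14.2, 2.1, 6.0, 2.5, 2.8]        # F_1..F_5
Th = [13.5, 2.0, 6.5, 3.0, 3.0]       # Th_1..Th_5
W = [0.4, 0.15, 0.15, 0.15, 0.15]     # W_1..W_5
R_time = time_risk(F, Th, W)          # 0.4 + 0.15 = 0.55
```

With these illustrative numbers, only F_1 and F_2 exceed their thresholds, so R_time = 0.55; it would then be compared with Th_sum before deciding whether step S5 runs.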
S5, posture features are extracted frame by frame for each action stage from the stage videos obtained by the segmentation in step S3; the specific implementation is as follows:
S51, rising-phase front-back offset of the central axis, F_6: the maximum angle by which the trunk central axis deviates from its still-sitting direction during standing up; the trunk central axis is obtained from the coordinates of the left shoulder, right shoulder, left hip and right hip. The specific calculation is as follows:
first, the slope of the trunk central axis is calculated;
the slope is then converted to an angle in radians, denoted by the symbol θ.
For each time t_i, the slope is calculated as above and converted to the radian angle θ(t_i); F_6 is then obtained as follows:
where T_1 < t_i < T_2 corresponds to the times during the whole standing-up process (the times t_i are in fact measured in frames: a slope is computed for each frame and converted to a radian angle; since the video is fixed at 30 frames per second, the frame sequence and the time sequence are equivalent); θ_sit denotes the mean offset angle of the sitting stage;
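A minimal sketch of the per-frame angle computation used by F_6. It assumes the trunk central axis is the segment from the hip midpoint to the shoulder midpoint; the midpoint construction and the y-up orientation are assumptions consistent with, but not fixed by, "obtained from the left/right shoulder and hip coordinates".

```python
import math

def trunk_axis_angle(l_sh, r_sh, l_hip, r_hip):
    """Angle (radians) of the trunk central axis measured from vertical:
    0 when perfectly upright, positive when leaning toward +x."""
    sx, sy = (l_sh[0] + r_sh[0]) / 2, (l_sh[1] + r_sh[1]) / 2
    hx, hy = (l_hip[0] + r_hip[0]) / 2, (l_hip[1] + r_hip[1]) / 2
    # atan2(horizontal offset, vertical extent) gives the deviation from
    # upright, which is the radian-angle conversion of the axis slope.
    return math.atan2(sx - hx, sy - hy)

def f6(angles_standup, theta_sit):
    """F_6: maximum deviation of the per-frame angles theta(t_i) during
    standing up (T_1 < t_i < T_2) from the sitting-stage mean theta_sit."""
    return max(abs(a - theta_sit) for a in angles_standup)
```

For example, a shoulder midpoint directly above the hip midpoint gives angle 0, and shifting the shoulders one unit sideways over a trunk of height two gives atan2(1, 2).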
S52, the mean angle F_7 of the central-axis deviation during the turning process is calculated;
S53, the maximum angle F_8 by which the central axis deviates from upright during the sitting-down process is calculated:
where T_5 < t_i < T_6, and θ_stand is the mean offset angle of the upright standing stage;
S54, the step-length-to-height ratio feature F_9 of walking is calculated: the horizontal distance between two successive landing positions of the same ankle is called a single step length; the single step lengths of both feet are averaged over the walking process, and the ratio of the average step length to the body height is taken. In the concrete calculation, the two smallest step values, i.e., the incomplete starting and ending steps, are removed according to magnitude. The absolute coordinate system obtained from OpenPose is used here, with coordinate values denoted X' and Y'.
The landing time points are obtained as extrema of the ankle height sequence:
Left-foot landing time point set:
A_floor_left = {t_m | T_2 < t_m < T_3 or T_4 < t_m < T_5, and X'(t_m) is a minimum}
t_m is the time point corresponding to the m-th minimum of the left-ankle abscissa in the absolute coordinate system obtained from OpenPose, and X'(t_m) denotes the left-ankle abscissa at t_m. The landing time points of the incomplete first and last steps are removed (at the start and end of walking the feet are brought together, which does not satisfy the definition of a single step length, so only the first and the last time points are removed), and the average single step length is computed.
Right-foot landing time point set:
A_floor_right = {t_n | T_2 < t_n < T_3 or T_4 < t_n < T_5, and X'(t_n) is a minimum}
t_n is the time point corresponding to the n-th minimum of the right-ankle abscissa in the absolute coordinate system obtained from OpenPose, and X'(t_n) denotes the right-ankle abscissa at t_n. The landing time points of the incomplete first and last steps are removed, and the average single step length is computed.
The single step length of the left foot is:
where the two terms are the (m+1)-th and m-th left-ankle abscissas in the absolute coordinate system;
the single step length of the right foot is computed analogously;
the step-length-to-height ratio feature F_9 of walking is then calculated as:
where BH denotes the body height;
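A sketch of the landing-point and step-length computation of S54, assuming landings are local minima of the ankle height sequence and step lengths are abscissa differences between consecutive landings. The function names, the simple local-minimum test without smoothing, and dropping the two smallest steps by magnitude are illustrative choices.

```python
import numpy as np

def landing_indices(ankle_y):
    """Indices of local minima of the ankle height sequence, taken as
    foot-landing instants (a simple reading of the extremum rule)."""
    y = np.asarray(ankle_y)
    return [i for i in range(1, len(y) - 1)
            if y[i] < y[i - 1] and y[i] <= y[i + 1]]

def step_height_ratio(ankle_x, ankle_y, body_height, drop=2):
    """F_9: mean single-step length over body height. The `drop` smallest
    step values (the incomplete first/last steps) are discarded, per S54."""
    idx = landing_indices(ankle_y)
    steps = [abs(ankle_x[j] - ankle_x[i]) for i, j in zip(idx, idx[1:])]
    steps = sorted(steps)[drop:]          # remove the two smallest steps
    return float(np.mean(steps)) / body_height
```

In practice this would be applied once per ankle (J_11 and J_12) over the walking windows (T_2, T_3) and (T_4, T_5), then averaged across both feet.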
S55, the left-right offset feature F_10 of the walking central axis is calculated: the mean horizontal left-right angular difference between the central axis and the upright direction during walking, calculated as follows:
S56, the walking swing-arm amplitude feature F_11: the angular difference between the elbow-shoulder vectors formed by the left shoulder, left elbow, right shoulder and right elbow nodes, i.e., the difference between the elbow-shoulder vector at the farthest backward swing and that at the farthest forward swing;
First, the swing-arm slopes of the left and right arms are calculated:
where X'(J_c), Y'(J_c) (c = 1, ..., 4) are the coordinates of J_c in the absolute coordinate system obtained from OpenPose;
the slopes are converted to radian angles α_left_arm and α_right_arm.
The swing-arm angles of the left and right arms are calculated in each frame to form left- and right-arm angle sequences; taking extrema in these sequences gives the maximum forward-swing angle A_af_left and maximum backward-swing angle A_ab_left of the left arm, and the maximum forward-swing angle A_af_right and maximum backward-swing angle A_ab_right of the right arm.
Adjacent A_af_left and A_ab_left are aligned and differenced to obtain the set A_as_left of single swing-arm angles of the left arm;
adjacent A_af_right and A_ab_right are aligned and differenced to obtain the set A_as_right of single swing-arm angles of the right arm.
The first and last data points of both A_as_left and A_as_right are removed (they correspond to non-pure walking actions), and F_11 is then calculated from the remaining sets A'_as_left and A'_as_right:
where α_m ∈ A'_as_left, α_n ∈ A'_as_right, and |A'_as_left|, |A'_as_right| denote the numbers of elements in A'_as_left and A'_as_right respectively;
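The extremum pairing of S56 can be sketched as below. Averaging the single-swing amplitudes of both arms is an assumed reading of the elided F_11 formula (the actual expression appears as an image in the original).

```python
def single_swing_amplitudes(forward_max, backward_max):
    """Pair adjacent forward/backward swing extrema and take the
    difference, then drop the first and last values, which correspond
    to non-pure walking actions (S56)."""
    amps = [f - b for f, b in zip(forward_max, backward_max)]
    return amps[1:-1]

def f11(left_f, left_b, right_f, right_b):
    """F_11 (assumed form): mean single swing-arm amplitude over the
    trimmed sets A'_as_left and A'_as_right of both arms."""
    vals = (single_swing_amplitudes(left_f, left_b)
            + single_swing_amplitudes(right_f, right_b))
    return sum(vals) / len(vals)
```

The same pairing pattern applies to the swing-leg feature F_12 of S57, with hip-knee angles β in place of the shoulder-elbow angles α.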
S57, the walking swing-leg amplitude feature F_12 is calculated: the angular difference between the vectors formed by the left hip, left knee, right hip and right knee nodes.
The leg-swing slopes of the left and right legs are calculated:
where X'(J_c), Y'(J_c) (c = 7, ..., 10) are the coordinates of J_c in the absolute coordinate system obtained from OpenPose;
K_left_leg and K_right_leg are converted from slopes to radian angles β_left_leg and β_right_leg.
The leg-swing angles of the left and right legs are calculated in each frame to form left- and right-leg angle sequences; taking extrema in these sequences gives the maximum forward-swing angle A_lf_left and maximum backward-swing angle A_lb_left of the left leg, and the maximum forward-swing angle A_lf_right and maximum backward-swing angle A_lb_right of the right leg.
Corresponding in time order, adjacent A_lf_left and A_lb_left are aligned and differenced to obtain the left-leg swing-angle sequence A_ls_left;
adjacent A_lf_right and A_lb_right are aligned and differenced to obtain the right-leg swing-angle sequence A_ls_right.
The first and last data points of the sequences A_ls_left and A_ls_right are removed, and F_12 is then obtained from the remaining sequences A'_ls_left and A'_ls_right as follows:
where β_m ∈ A'_ls_left, β_n ∈ A'_ls_right, and |A'_ls_left|, |A'_ls_right| denote the numbers of elements in A'_ls_left and A'_ls_right respectively.
S6, the posture risk is judged from the posture features obtained in step S5. The specific implementation is as follows: larger values of the step-to-height ratio F_9, swing-arm amplitude F_11 and swing-leg amplitude F_12 indicate better mobility and balance, while smaller values of the axis-offset features F_6, F_7, F_8, F_10 indicate better balance ability. The posture features F_6 to F_12 are input into a multi-layer perceptron for regression prediction, yielding the posture risk value R_posture.
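As an illustration of the regression step, scikit-learn's `MLPRegressor` can stand in for the multi-layer perceptron. The synthetic feature matrix, the target function and the hidden-layer sizes below are assumptions for demonstration; real use would train on the labeled TUG dataset of step S13.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
# Synthetic stand-in for (F_6..F_12, risk label) training pairs: a noisy
# near-linear target so the regressor has something learnable.
X = rng.normal(size=(200, 7))
y = (0.3 * X[:, 0] + 0.2 * X[:, 3] - 0.25 * X[:, 6]
     + rng.normal(scale=0.05, size=200))

mlp = MLPRegressor(hidden_layer_sizes=(16, 8), max_iter=3000, random_state=0)
mlp.fit(X, y)
r_posture = float(mlp.predict(X[:1])[0])   # posture risk for one subject
```

The seven-column input mirrors the seven posture features F_6 to F_12; in deployment the fitted model is applied to a new subject's feature vector to obtain R_posture.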
S7, the final comprehensive risk judgment is made from the time risk and the posture risk obtained in steps S4 and S6. The specific implementation is as follows: if the time risk R_time obtained in step S4 already exceeds the threshold Th_sum, the comprehensive risk judgment directly yields the result that the fall risk is high; otherwise the following determination is made:
the weights W_time and W_posture are set by linear fitting, and the final risk is R_final = W_time · R_time + W_posture · R_posture;
the final risk R_final is compared with a high-risk reference threshold Th_danger and a medium-risk reference threshold Th_alarm: a subject at or above Th_danger is considered to be at high risk of falling; a subject at or above Th_alarm but below Th_danger is considered to be at medium risk of falling; a subject below Th_alarm is considered to be at low risk of falling.
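The decision logic of S7 can be summarized as follows; all threshold and weight values are illustrative, since the patent leaves them to fitting on actual risk data.

```python
def final_grade(r_time, r_posture, w_time, w_posture,
                th_sum, th_danger, th_alarm):
    """Comprehensive judgment of S7: the time-risk short-circuit of S4,
    then the weighted combination compared against the high-risk and
    medium-risk reference thresholds."""
    if r_time > th_sum:                    # S4 short-circuit: skip S5-S6
        return "high"
    r_final = w_time * r_time + w_posture * r_posture
    if r_final >= th_danger:
        return "high"
    if r_final >= th_alarm:
        return "medium"
    return "low"
```

For example, with Th_sum = 0.8, Th_danger = 0.7 and Th_alarm = 0.4 (illustrative), a subject with R_time = 0.5 and R_posture = 0.5 under equal weights lands in the medium-risk band.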
Those of ordinary skill in the art will recognize that the embodiments described herein are intended to help the reader understand the principles of the present invention, and that the scope of the invention is not limited to these specific statements and embodiments. Those of ordinary skill in the art may make various other specific modifications and combinations based on the teachings of the present disclosure without departing from its spirit, and such modifications and combinations remain within the scope of the present disclosure.
Claims (5)
1. A method for predicting the fall risk of an elderly person, comprising the following steps:
S1, data acquisition and preprocessing, comprising the following sub-steps:
S11, arranging the data acquisition environment according to the TUG standard: a chair 60 cm high faces a flat walkway 3 m long; the camera must have a resolution of at least 720p and is fixed at a height of 1.5 m;
S12, shooting starts with the subject seated upright; the subject then stands up and walks forward, turns back after reaching the end of the walkway, returns to the starting point and sits down again, and shooting stops; the resulting video is preprocessed to 1280x720 pixels at 30 frames per second;
S13, taking the actual number of falls of the subject in the past two years as the label of the sample data to form a training data set;
S14, dividing the fall risk into 3 grades: the low-risk grade corresponds to no fall record within 2 years, the medium-risk grade corresponds to 1 to 2 fall records within 2 years, and the high-risk grade corresponds to 3 or more fall records;
S2, skeleton modeling: the human body in the video is modeled with OpenPose as a skeleton of 13 nodes, denoted J_0 to J_12, specifically: J_0 head, J_1 left shoulder, J_2 right shoulder, J_3 left elbow, J_4 right elbow, J_5 left wrist, J_6 right wrist, J_7 left hip, J_8 right hip, J_9 left knee, J_10 right knee, J_11 left ankle, J_12 right ankle;
the horizontal- and vertical-axis coordinates on each frame of the 2D image are acquired through the OpenPose framework;
the absolute coordinates obtained from OpenPose are reconstructed into a relative coordinate system: in each frame, the person's center is located from the absolute coordinates of nodes J_1, J_2, J_7, J_8, the person is enclosed in a rectangular bounding box of fixed aspect ratio, the lower-left corner of the box is taken as the coordinate origin, the directions of the horizontal and vertical axes are unchanged, and a new coordinate system is established;
S3, video temporal segmentation based on action features: the video is divided into 7 segments, labeled S_0 to S_6; the 6 time division points between the 7 segments are denoted T_1 to T_6 in order, and the start and end points of the whole video are denoted T_0 and T_7 respectively; S_0 corresponds to sitting, S_1 to standing up, S_2 to forward walking, S_3 to turning around in place, S_4 to return walking, S_5 to turning around and sitting down, and S_6 to sitting again;
S4, judging the time risk from the temporal features obtained by the segmentation in step S3, specifically: 5 temporal features are defined:
Total time: F_1 = |S_1| + |S_2| + |S_3| + |S_4| + |S_5| = T_6 - T_1
Rising-phase time: F_2 = |S_1| = T_2 - T_1
Walking-phase time: F_3 = |S_2| + |S_4| = T_5 + T_3 - T_4 - T_2
Turning-phase time: F_4 = |S_3| = T_4 - T_3
Turn-and-sit phase time: F_5 = |S_5| = T_6 - T_5
Thresholds Th_1, Th_2, Th_3, Th_4, Th_5 are set for the five temporal features; since different temporal features carry different weights in the risk assessment, a risk score weight W_1, W_2, W_3, W_4, W_5 is set for each temporal feature, giving the risk score R_time of the temporal features:
if the temporal-feature risk score R_time exceeds a preset integrated time threshold Th_sum, the risk rating is directly judged to be high and the subsequent steps are not performed; otherwise, step S5 is executed;
S5, extracting posture features frame by frame for each action stage from the stage videos obtained by the segmentation in step S3;
S6, judging the posture risk from the posture features obtained in step S5;
and S7, making the final comprehensive risk judgment from the time risk and the posture risk obtained in steps S4 and S6.
2. The method for predicting the fall risk of an elderly person according to claim 1, wherein in step S3 the specific method of video temporal segmentation based on action features is as follows:
S31, the following action features are constructed:
the height difference between nodes J_7, J_8 and nodes J_9, J_10 is defined as F_clip_1, where Y(J_7), Y(J_8), Y(J_9), Y(J_10) denote the ordinates of J_7, J_8, J_9 and J_10 respectively;
the horizontal distance between nodes J_1 and J_2 is defined as:
F_clip_2 = |X(J_1) - X(J_2)|
where X(J_1), X(J_2) denote the abscissas of J_1 and J_2 respectively;
the horizontal distance between nodes J_7 and J_8 is defined as:
F_clip_3 = |X(J_7) - X(J_8)|
where X(J_7), X(J_8) denote the abscissas of J_7 and J_8 respectively;
S32, the three action features of S31 are computed for each frame, yielding three discrete sequences F_clip_1, F_clip_2, F_clip_3; the binary change-point search algorithm BinSeg with an L2 cost model, tuned with empirical parameters, is used to locate the change points of each sequence, and the stage division of the actions is obtained from these change points; specifically:
using the F_clip_1 sequence, the change-point search algorithm combined with prior knowledge determines the boundary point T_1 between S_0 and S_1; specifically, T_1 appears at the 1st change point of F_clip_1;
using F_clip_1, the boundary point T_2 between S_1 and S_2 is determined; T_2 appears at the 2nd change point of F_clip_1;
using F_clip_2, the boundary point T_3 between S_2 and S_3 is determined; T_3 appears at the 1st change point of F_clip_2;
using F_clip_3, the boundary point T_4 between S_3 and S_4 is determined; T_4 appears at the 3rd change point of F_clip_3;
using F_clip_2, the boundary point T_5 between S_4 and S_5 is determined; T_5 appears at the 4th change point of F_clip_2;
using F_clip_1, the boundary point T_6 between S_5 and S_6 is determined; T_6 appears at the 4th change point of F_clip_1.
3. The method for predicting the fall risk of an elderly person according to claim 1, wherein step S5 is implemented as follows:
S51, rising-phase front-back offset of the central axis, F_6: the maximum angle by which the trunk central axis deviates from its still-sitting direction during standing up; the trunk central axis is obtained from the coordinates of the left shoulder, right shoulder, left hip and right hip; the specific calculation is as follows:
first, the slope of the trunk central axis is calculated;
the slope is then converted to an angle in radians, denoted by the symbol θ;
for each time t_i, the slope is calculated as above and converted to the radian angle θ(t_i); F_6 is then obtained as follows:
where T_1 < t_i < T_2 corresponds to the times during the whole standing-up process, and θ_sit denotes the mean offset angle of the sitting stage;
S52, the mean angle F_7 of the central-axis deviation during the turning process is calculated;
S53, the maximum angle F_8 by which the central axis deviates from upright during the sitting-down process is calculated:
where T_5 < t_i < T_6, and θ_stand is the mean offset angle of the upright standing stage;
S54, calculating the step-length-to-height ratio feature F_9 of walking: the horizontal distance between two successive landing positions of the same ankle is called a single step length; the single step lengths of both feet are averaged over the walking process, and the ratio of the average step length to the body height is taken;
the landing time points are obtained as extrema of the ankle height sequence:
Left-foot landing time point set:
A_floor_left = {t_m | T_2 < t_m < T_3 or T_4 < t_m < T_5, and X'(t_m) is a minimum}
t_m is the time point corresponding to the m-th minimum of the left-ankle abscissa in the absolute coordinate system obtained from OpenPose, and X'(t_m) denotes the left-ankle abscissa at t_m; the landing time points of the incomplete first and last steps are removed, and the average single step length is computed;
right-foot landing time point set:
A_floor_right = {t_n | T_2 < t_n < T_3 or T_4 < t_n < T_5, and X'(t_n) is a minimum}
t_n is the time point corresponding to the n-th minimum of the right-ankle abscissa in the absolute coordinate system obtained from OpenPose, and X'(t_n) denotes the right-ankle abscissa at t_n; the landing time points of the incomplete first and last steps are removed, and the average single step length is computed;
The single step length of the left foot is:
where the two terms are the (m+1)-th and m-th left-ankle abscissas in the absolute coordinate system;
the single step length of the right foot is computed analogously;
the step-length-to-height ratio feature F_9 of walking is then calculated as:
where BH denotes the body height;
S55, the left-right offset feature F_10 of the walking central axis is calculated: the mean horizontal left-right angular difference between the central axis and the upright direction during walking, calculated as follows:
S56, the walking swing-arm amplitude feature F_11: the angular difference between the elbow-shoulder vectors formed by the left shoulder, left elbow, right shoulder and right elbow nodes, i.e., the difference between the elbow-shoulder vector at the farthest backward swing and that at the farthest forward swing;
First, the swing-arm slopes of the left and right arms are calculated:
where X'(J_c), Y'(J_c) (c = 1, ..., 4) are the coordinates of J_c in the absolute coordinate system obtained from OpenPose;
the slopes are converted to radian angles α_left_arm and α_right_arm;
the swing-arm angles of the left and right arms are calculated in each frame to form left- and right-arm angle sequences; taking extrema in these sequences gives the maximum forward-swing angle A_af_left and maximum backward-swing angle A_ab_left of the left arm, and the maximum forward-swing angle A_af_right and maximum backward-swing angle A_ab_right of the right arm;
adjacent A_af_left and A_ab_left are aligned and differenced to obtain the set A_as_left of single swing-arm angles of the left arm;
adjacent A_af_right and A_ab_right are aligned and differenced to obtain the set A_as_right of single swing-arm angles of the right arm;
the first and last data points in both A_as_left and A_as_right are removed, and F_11 is then calculated from the remaining sets A'_as_left and A'_as_right:
where α_m ∈ A'_as_left, α_n ∈ A'_as_right, and |A'_as_left|, |A'_as_right| denote the numbers of elements in A'_as_left and A'_as_right respectively;
S57, the walking swing-leg amplitude feature F_12 is calculated: the angular difference between the vectors formed by the left hip, left knee, right hip and right knee nodes;
the leg-swing slopes of the left and right legs are calculated:
where X'(J_c), Y'(J_c) (c = 7, ..., 10) are the coordinates of J_c in the absolute coordinate system obtained from OpenPose;
K_left_leg and K_right_leg are converted from slopes to radian angles β_left_leg and β_right_leg;
the leg-swing angles of the left and right legs are calculated in each frame to form left- and right-leg angle sequences; taking extrema in these sequences gives the maximum forward-swing angle A_lf_left and maximum backward-swing angle A_lb_left of the left leg, and the maximum forward-swing angle A_lf_right and maximum backward-swing angle A_lb_right of the right leg;
adjacent A_lf_left and A_lb_left are aligned and differenced to obtain the left-leg swing-angle sequence A_ls_left;
adjacent A_lf_right and A_lb_right are aligned and differenced to obtain the right-leg swing-angle sequence A_ls_right;
the first and last data points of the sequences A_ls_left and A_ls_right are removed, and F_12 is then obtained from the remaining sequences A'_ls_left and A'_ls_right as follows:
where β_m ∈ A'_ls_left, β_n ∈ A'_ls_right, and |A'_ls_left|, |A'_ls_right| denote the numbers of elements in A'_ls_left and A'_ls_right respectively.
4. The method for predicting the fall risk of elderly people according to claim 3, wherein step S6 is implemented as follows: the posture features F_6 to F_12 are input into a multi-layer perceptron for regression prediction to obtain the posture risk value R_posture.
5. The method for predicting the fall risk of an elderly person according to claim 1, wherein step S7 is implemented as follows: if the time risk R_time obtained in step S4 already exceeds the threshold Th_sum, the comprehensive risk judgment directly yields the result that the fall risk is high; otherwise the following determination is made:
the weights W_time and W_posture are set by linear fitting, and the final risk is R_final = W_time · R_time + W_posture · R_posture;
the final risk R_final is compared with a high-risk reference threshold Th_danger and a medium-risk reference threshold Th_alarm: a subject at or above Th_danger is considered to be at high risk of falling; a subject at or above Th_alarm but below Th_danger is considered to be at medium risk of falling; a subject below Th_alarm is considered to be at low risk of falling.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210321855.9A CN114694252B (en) | 2022-03-30 | 2022-03-30 | Old people falling risk prediction method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114694252A CN114694252A (en) | 2022-07-01 |
CN114694252B true CN114694252B (en) | 2023-04-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111144217B (en) | | Motion evaluation method based on human body three-dimensional joint point detection |
CN108597578B (en) | | Human motion assessment method based on two-dimensional skeleton sequences |
Ceseracciu et al. | | Markerless analysis of front crawl swimming |
CN104598867B (en) | | Automatic human action evaluation method and dance scoring system |
US10186041B2 (en) | | Apparatus and method for analyzing golf motion |
KR102514697B1 (en) | | Apparatus and method for analyzing golf motion |
Mehrizi et al. | | Predicting 3-D lower back joint load in lifting: A deep pose estimation approach |
CN107229920B (en) | | Behavior recognition method based on integrated deep canonical time warping and correlation correction |
CN105664462A (en) | | Auxiliary training system based on a human pose estimation algorithm |
CN103824326B (en) | | Dynamic three-dimensional human body modeling method |
CN106815855A (en) | | Human motion tracking method combining generative and discriminative models |
CN114694252B (en) | | Old people falling risk prediction method |
CN111091889A (en) | | Human body form detection method based on mirror display, storage medium and device |
WO2016107226A1 (en) | | Image processing method and apparatus |
CN113283373A (en) | | Method for enhancing detection of limb motion parameters with a depth camera |
CN116721471A (en) | | Multi-person three-dimensional pose estimation method based on multiple views |
Unzueta et al. | | Dependent 3d human body posing for sports legacy recovery from images and video |
Huang et al. | | An OpenPose-based System for Evaluating Rehabilitation Actions in Parkinson's Disease |
CN117333932A (en) | | Method, device and medium for identifying sarcopenia based on machine vision |
CN114642867B (en) | | AI coaching system with automatic error correction of rowing motion posture |
CN115530814A (en) | | Children's motor rehabilitation training method based on visual posture detection and deep learning |
CN113255450A (en) | | Human motion rhythm comparison system and method based on pose estimation |
CN112017211A (en) | | Temporomandibular joint movement tracking method and system |
Chinnaiah et al. | | A new deliberation of embedded based assistive system for Yoga |
CN112233769A (en) | | Post-illness recovery system based on data acquisition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||