CN108985227B - Motion description and evaluation method based on space triangular plane features - Google Patents

Motion description and evaluation method based on space triangular plane features

Info

Publication number
CN108985227B
CN108985227B (application CN201810776306.4A)
Authority
CN
China
Prior art keywords
feature
motion
weight
characteristic
static
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810776306.4A
Other languages
Chinese (zh)
Other versions
CN108985227A (en)
Inventor
孟明
王子健
袁敏达
徐玉明
邹蕾蕾
陈玉平
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Dianzi University filed Critical Hangzhou Dianzi University
Priority to CN201810776306.4A priority Critical patent/CN108985227B/en
Publication of CN108985227A publication Critical patent/CN108985227A/en
Application granted granted Critical
Publication of CN108985227B publication Critical patent/CN108985227B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a human motion description and evaluation method based on spatial triangular plane features. First, spatial feature triangular planes are constructed from the human body joint points, and the relative area between each feature triangular plane and the torso plane, together with the included angle between the plane's normal vector and the vertical direction, is calculated to form the motion evaluation feature quantity. The motion evaluation feature quantity is then adaptively weighted according to the local action differences of the human body. Finally, a final score is obtained with an improved DTW evaluation algorithm. Describing human motion with spatial triangular planes allows human actions in different postures at different moments to be expressed accurately. The adaptive weighting based on local action differences of the human body highlights the important parts and weakens the secondary parts, making the motion evaluation features more reasonable. The improved DTW-based evaluation algorithm improves the accuracy of action matching compared with the traditional DTW algorithm, making the motion evaluation more effective.

Description

Motion description and evaluation method based on space triangular plane features
Technical Field
The invention belongs to the field of computer vision, and relates to a human motion action description and evaluation method based on space triangular plane characteristics.
Background
In computer vision applications in fields such as security, entertainment, fitness, sports and rehabilitation, human action description is a key technical link and plays an important role in recognizing human actions and evaluating how well they are completed.
Describing human actions mainly involves acquiring motion data through sensors or other means and then extracting features that represent the action. Current research on motion description falls into two categories: methods based on wearable sensors and methods based on vision. Wearable-sensor-based methods represent an action by acquiring the motion trajectories and rotation angles of the important body parts of a tester; however, many sensors are needed during the experiment, which is very cumbersome for the user, and the sensors are expensive. Vision-based description methods are mainly divided into three categories: (1) action description based on low-level tracking or pose estimation, where the extracted features are mainly static features and dynamic features derived from motion information, so their effectiveness depends on the accuracy of target tracking and human pose estimation; in real scenes the background is often cluttered and there are many moving targets, so accurate target tracking and pose estimation are very challenging and such features are not very robust; (2) motion description based on image-processing techniques, which generally extracts optical-flow-based dynamic features and spatio-temporal features; these methods usually describe the local motion of an image or a spatio-temporal cube, require heavy computation, are easily disturbed by noise, and lack consideration of the motion behavior pattern as a whole and of global analysis; (3) motion description based on learning methods, which extracts mid-level semantic features such as objects, poses and scenes; this can be very effective for action recognition in specific scenes, but because an artificially defined 'action attribute space' is involved, an incomplete or inaccurate attribute space may degrade action recognition performance in real natural scenes.
Human movement is a complex motion in which all parts of the body participate cooperatively, so the proposed features must describe the change information of all the joints of the human body. Changes in body motion can be reflected by the spatial relationships of the limb segments between the joints. The invention therefore provides a method for extracting continuous human action features from the skeleton model acquired by a Kinect depth camera, using feature triangular planes formed by human joint points, and their changes, as the action description.
To evaluate motion similarity, it is necessary to extract spatial, temporal and shape features that sufficiently characterize the motion, and to obtain an evaluation result from these feature quantities with an appropriate evaluation method. Currently available motion evaluation methods include template-based, probabilistic and grammar-based methods. Template matching is intuitive and simple but lacks robustness. Probabilistic statistical methods require a large amount of training data to learn the model parameters; moreover, for generative models it is usually assumed, for ease of model solution, that samples are independently and identically distributed and that different observations are mutually independent, an independence assumption that is often inconsistent with how the data are actually generated. Grammar-based methods facilitate the understanding of complex structures and the efficient use of prior knowledge, and can generally be combined with the first two methods.
Disclosure of Invention
In order to solve the technical problems, the invention provides a motion description method based on spatial triangular plane features and a motion evaluation method based on improved DTW.
In order to achieve the above object, the method of the present invention mainly comprises the following steps:
(1) First, the human body is divided into three parts, namely the upper limbs, the lower limbs and the trunk, and 9 feature triangular planes representing human movement are constructed from the human joint points. The 9 feature triangular planes comprise: the upper-limb triangular planes S1 and S2 formed between the left and right upper limbs; the upper-trunk triangular planes S3 and S4 formed by the left and right elbow joints, shoulder joints and hip joints; the crotch triangular plane S5 formed by the left knee joint, the right knee joint and the center point of the caudal vertebra; the lower-limb triangular planes S6 and S7 formed by the left and right ankle, knee and hip joints; and the lower-trunk triangular planes S8 and S9 formed by the left and right knee joints, the left and right hip joints and the left shoulder joint.
(2) The human torso plane S10 is formed by the shoulder joints on both sides and the hip joints on both sides, and the relative areas of the 9 feature triangular planes with respect to the torso plane are calculated as motion evaluation feature quantities.
(3) The included angles between the normal vectors of the 9 feature planes and the vertical direction are calculated and also used as motion evaluation feature quantities.
(4) Based on the relative areas of the 9 feature triangular planes to the torso plane and the included angles between the 9 normal vectors and the vertical direction, a human motion descriptor consisting of 18 f-dimensional vectors is constructed for the subject, where f is the number of frames of the motion.
(5) Weighting stationary part features and moving part features
Weighting of stationary-part features. Two cases are distinguished: static features with 0 weight and static features that are weighted. The feature type is judged by whether the state of the static feature is close to the natural standing state: if a feature is static and close to the natural standing state, it is given a weight of 0; if a feature is static but differs greatly from the natural standing state, it is given a weight. The weight coefficients are determined as follows. Assuming the number of 0-weight feature components is n, the current weight of each remaining feature component is 1/(18−n). Since the stationary-part features belong to secondary action parts, a weakening factor ξ is applied to their weights. According to the difference between each feature and its value in the natural state, the weight is increased or decreased from the base value ξ/(18−n). First the number m of static weighted features is determined; the total weight of the static parts is ξm/(18−n). The features are sorted in increasing order of their difference values; the feature weight corresponding to the middle value remains ξ/(18−n), and the weights are then increased or decreased toward the two sides. Each weight ws is:
Figure BDA0001731534390000033 [equation image: expression for ws]
where i is the rank distance between a feature and the middle-position feature after the m features are sorted in increasing order of their difference values; the plus sign is taken for features to the right of the middle value and the minus sign for those to the left. If m is even, the two feature weights at the middle position are both ξ/(18−n).
Weighting of moving-part features. The weighting of the moving-part features also follows the gradient allocation principle, but the sorting is based on the average inter-frame change rate of each feature. Let the number of motion features be p, with p = 18 − n − m. After the features are sorted in increasing order of average inter-frame change rate, the feature weight corresponding to the middle value is
Figure BDA0001731534390000031 [equation image: middle-value weight of the moving-part features]
and the weights are then increased or decreased toward the two sides; each moving-part weight wk is:
Figure BDA0001731534390000032 [equation image: expression for wk]
where i is the rank distance between a feature and the middle-position feature after the p features are sorted in increasing order of average inter-frame change rate; the plus sign is taken to the right of the middle value and the minus sign to the left. If p is even, the two feature weights at the middle position are both
Figure BDA0001731534390000041 [equation image: middle-position weight when p is even]
The weakening factor ξ takes a value of 0.6–0.9 depending on the number of static weighted features and the intensity of the action. The final feature vector F is obtained as:
Figure BDA0001731534390000042 [equation image: final feature vector F]
where fa denotes the a-th feature component and wa denotes the weight of the a-th feature component.
(6) Motion evaluation based on motion descriptors.
First, the skeleton point data of the test motion are obtained from the Kinect, an action descriptor is constructed with the action description method based on the spatial triangular planes, and the descriptor is weighted with the adaptive weighting method to obtain the final feature vector F. The matching distance between the two sequences R(i) and T(j) is then calculated, where R(i) is the final feature vector F sequence of the action being compared and T(j) is the standard action feature vector sequence. Three-dimensional vectors [qi, qi′, qi″] and [cj, cj′, cj″] are created from R(i) and T(j) using their own values together with their first- and second-order differentials. The first-order differential reflects the slope of the sequence and thus indicates the speed of motion, while the second-order differential provides concavity/convexity and turning points. The first-order differential is used as the argument of a Sigmoid function, which is then multiplied by the sum of the original data and the second-order differential, so that the inter-sequence distance D(R(i), T(j)) is negatively correlated with the first-order differential; that is, when the movement speed is slow, D(R(i), T(j)) is larger, which increases the weight of important actions in the movement process. After combining the differentiated three-dimensional vectors, the algorithm defines D(R(i), T(j)) as:
Figure BDA0001731534390000043 [equation image: definition of D(R(i), T(j))]
wherein the first order differential and the second order differential are respectively defined as:
Figure BDA0001731534390000044 [equation image: definition of the first-order differential qi′]
qi″=qi+1+qi-1-2qi
Finally, an M × N matrix A is constructed, in which the value a(R(i), T(j)) of each element represents the distance D(R(i), T(j)) between the sequence points corresponding to that position. An optimal path through the matrix is selected with the optimal path selection method, and the element values along the path are accumulated to obtain the cumulative distance difference of the DTW matching algorithm. The cumulative distance difference is inversely related to the evaluation score.
The invention has the following beneficial effects:
1. Spatial triangular planes are selected to describe the motion characteristics of the human body, so that human actions in different postures at different moments can be expressed accurately. At the same time, the human body is given structure by the spatial triangular planes constructed from the skeleton points, which makes the motion features more reasonable.
2. The adaptive weight coefficient determination method highlights the important action parts and weakens the secondary parts.
3. Compared with the traditional DTW algorithm, the improved DTW-based evaluation algorithm improves the accuracy of action matching.
Drawings
FIG. 1 shows the spatial triangular feature planes and the torso plane;
FIG. 2 shows the normal vector of a spatial triangular feature plane;
FIG. 3 is a flow chart of adaptive weighting of feature quantities;
fig. 4 is a flow chart of motion description and motion estimation.
Detailed Description
(1) 9 feature triangular planes for human motion evaluation are constructed from the human joint points: the upper-limb triangular planes S1 and S2 formed between the left and right upper limbs; the upper-trunk triangular planes S3 and S4 formed by the left and right elbow joints, shoulder joints and hip joints; the crotch triangular plane S5 formed by the left knee joint, the right knee joint and the center point of the caudal vertebra; the lower-limb triangular planes S6 and S7 formed by the left and right ankle, knee and hip joints; and the lower-trunk triangular planes S8 and S9 formed by the left and right knee joints, the left and right hip joints and the left shoulder joint. As shown in fig. 1, the feature triangular planes and the torso feature plane are represented by the human joint points.
(2) The torso part is the closed trapezoidal region formed by the shoulder joints and hip joints on both sides. The coordinates of the two shoulder joints are recorded as Gsl(xsl, ysl, zsl) and Gsr(xsr, ysr, zsr), and the coordinates of the two hip joints as Ghl(xhl, yhl, zhl) and Ghr(xhr, yhr, zhr). The area of the torso part is obtained from the trapezoid area formula:
Figure BDA0001731534390000051 [equation image: trapezoid area formula for the torso plane S10]
The 9 relative areas between the 9 feature triangular planes and the torso area are then calculated:
Figure BDA0001731534390000052 [equation image: relative-area formula]
where n = 1–9; the relative areas of the 9 feature planes are used as part of the motion evaluation feature quantity.
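As an illustration of step (2), the following Python sketch computes the triangle areas and relative areas from 3-D joint coordinates. It is a minimal sketch under assumptions not stated in the patent: joints are NumPy arrays of camera-space coordinates, and the torso area is approximated by splitting the shoulder/hip quadrilateral into two triangles rather than using the patent's trapezoid formula, which is only available as an equation image.

```python
import numpy as np

def triangle_area(p1, p2, p3):
    # Area of the triangle spanned by three 3-D points: half the cross-product norm.
    return 0.5 * np.linalg.norm(np.cross(p2 - p1, p3 - p1))

def torso_area(shoulder_l, shoulder_r, hip_r, hip_l):
    # Area of the closed torso region S10, approximated here by splitting the
    # shoulder/hip quadrilateral into two triangles (assumption, not the
    # patent's trapezoid formula).
    return (triangle_area(shoulder_l, shoulder_r, hip_r)
            + triangle_area(shoulder_l, hip_r, hip_l))

def relative_areas(feature_triangles, s10):
    # Relative area of each feature triangle with respect to the torso area S10.
    return [triangle_area(*tri) / s10 for tri in feature_triangles]
```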
(3) Let the three vertex coordinates of one of the feature triangles be Gt1(xt1, yt1, zt1), Gt2(xt2, yt2, zt2) and Gt3(xt3, yt3, zt3). The normal vector of the feature triangular plane,
Figure BDA0001731534390000061 [equation image: normal vector (a, b, c)]
makes an angle with the vertical direction of
Figure BDA0001731534390000062 [equation image: angle between the normal vector and the vertical direction]
Wherein:
a=(yt2-yt1)*(zt3-zt1)-(yt3-yt1)*(zt2-zt1)
b=(zt2-zt1)*(xt3-xt1)-(zt3-zt1)*(xt2-xt1)
c=(xt2-xt1)*(yt3-yt1)-(xt3-xt1)*(yt2-yt1)
The included angles between the 9 normal vectors of the 9 feature triangular planes and the vertical direction are then calculated and used as motion evaluation feature quantities. As shown in FIG. 2, the feature triangular plane S1 is taken as an example to show its normal vector
Figure BDA0001731534390000063 [equation image: normal vector symbol]
and the angle α between it and the vertical direction.
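A corresponding sketch for step (3) is shown below. It assumes the vertical direction is the y-axis of the camera coordinate system, which the patent does not state explicitly; the normal vector is the cross product of two triangle edges, matching the components a, b and c given above.

```python
import numpy as np

def normal_angle_with_vertical(p1, p2, p3, vertical=np.array([0.0, 1.0, 0.0])):
    # Normal vector (a, b, c) of the feature triangle via the cross product of two edges.
    n = np.cross(p2 - p1, p3 - p1)
    # Angle alpha between the normal vector and the assumed vertical axis.
    cos_alpha = np.dot(n, vertical) / (np.linalg.norm(n) * np.linalg.norm(vertical))
    return np.arccos(np.clip(cos_alpha, -1.0, 1.0))
```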
(4) The relative areas of the 9 feature triangular planes and the included angles between their normals and the vertical direction, 18 factors in total, form the motion evaluation feature quantity describing the human motion posture in a single frame. So that the relative-area feature components and the angle feature components are of the same order of magnitude, each relative area value is multiplied by 100 before being used as a feature value. Mathematically, the motion evaluation feature quantity is represented as 18 f-dimensional vectors, where f is the number of frames of the described motion.
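For step (4), a minimal sketch of assembling the 18 × f descriptor from precomputed per-frame relative areas and angles might look as follows; the input array shapes and names are assumptions for illustration.

```python
import numpy as np

def motion_descriptor(rel_areas_per_frame, angles_per_frame):
    # rel_areas_per_frame, angles_per_frame: arrays of shape (f, 9).
    # Relative areas are scaled by 100 so both feature groups share the same
    # order of magnitude, as described above.
    rel = 100.0 * np.asarray(rel_areas_per_frame, dtype=float)
    ang = np.asarray(angles_per_frame, dtype=float)
    return np.concatenate([rel, ang], axis=1).T   # shape (18, f): 18 f-dimensional vectors
```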
(5) The primary and secondary parts of a human movement are mainly manifested in two ways: either a joint point that moves faster than slower or stationary joint points is the primary action part, or a joint point that is stationary but whose position during the stationary period differs greatly from the natural state of the human body is the primary action part. Based on this, the invention provides an adaptive weighting method based on local action differences of the human body, divided into two aspects: first the weighting of the stationary-part features and then the weighting of the moving-part features. Fig. 3 is a flowchart of the adaptive weighting of the motion evaluation feature quantities.
Weighting of stationary-part features. Two cases are distinguished: static features with 0 weight and static features that are weighted. The feature type is judged by whether the state of the static feature is close to the natural standing state: if a feature is static and close to the natural standing state, it is given a weight of 0; if a feature is static but differs greatly from the natural standing state, it is given a weight. The weight coefficients are determined as follows. Assuming the number of 0-weight feature components is n, the current weight of each remaining feature component is 1/(18−n). Since the static weighted features belong to secondary action parts, a weakening factor ξ is applied to their weights. According to the degree of difference between each feature and its natural state (i.e., the difference between the feature and the natural-state feature value), the weight is increased or decreased from the base value ξ/(18−n). First the number m of static weighted features is determined; the total weight of the static parts is ξm/(18−n). The features are sorted in increasing order of their difference values; the feature weight corresponding to the middle value remains ξ/(18−n), and the weights are then increased or decreased toward the two sides. Each weight ws is:
Figure BDA0001731534390000071 [equation image: expression for ws]
where i is the rank distance between a feature and the middle-position feature after the m features are sorted in increasing order of their difference values; features to the right of the middle value take the plus sign and those to the left take the minus sign. If m is even, the two feature weights at the middle position are both ξ/(18−n).
Weighting of moving-part features. The weighting of the moving-part features also follows the gradient allocation principle, but the sorting is based on the average inter-frame change rate of each feature. Let the number of motion features be p, with p = 18 − n − m. After the features are sorted in increasing order of average inter-frame change rate, the feature weight corresponding to the middle value is
Figure BDA0001731534390000072 [equation image: middle-value weight of the moving-part features]
and the weights are then increased or decreased toward the two sides; each moving-part weight wk is:
Figure BDA0001731534390000073 [equation image: expression for wk]
where i is the rank distance between a feature and the middle-position feature after the p features are sorted in increasing order of average inter-frame change rate; the plus sign is taken to the right of the middle value and the minus sign to the left. If p is even, the two feature weights at the middle position are both
Figure BDA0001731534390000074 [equation image: middle-position weight when p is even]
The weakening factor ξ generally takes a value of 0.6–0.9 depending on the number of static weighted features and the intensity of the action. The final feature vector F is obtained as:
Figure BDA0001731534390000075 [equation image: final feature vector F]
where fa denotes the a-th feature component and wa denotes the weight of the a-th feature component.
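The adaptive weighting of step (5) can be sketched as below. The patent's exact gradient formulas are only given as equation images, so the step size delta, the thresholds, and the assumption that all weights sum to 1 are illustrative choices; only the overall structure (0-weight static features, weakened static weighted features sorted by difference from the natural pose, moving features sorted by average inter-frame change rate, gradients centered on the middle rank) follows the text above.

```python
import numpy as np

def adaptive_weights(mean_values, natural_values, change_rates,
                     xi=0.8, static_eps=1e-3, diff_eps=1e-3, delta=0.01):
    # mean_values: mean of each of the 18 feature components over the motion.
    # natural_values: the same components in the natural standing state.
    # change_rates: average inter-frame change rate of each component.
    is_static = change_rates < static_eps
    diff = np.abs(mean_values - natural_values)
    is_zero = is_static & (diff < diff_eps)          # static and close to the natural pose
    n = int(is_zero.sum())
    w = np.zeros(18)

    # Static weighted features: weakened base weight xi/(18-n), graded by difference value.
    static_idx = np.where(is_static & ~is_zero)[0]
    order = static_idx[np.argsort(diff[static_idx])]
    mid = (len(order) - 1) / 2.0
    for rank, idx in enumerate(order):
        w[idx] = xi / (18 - n) + delta * (rank - mid)

    # Moving features: remaining weight (assuming all weights sum to 1),
    # graded by average inter-frame change rate.
    moving_idx = np.where(~is_static)[0]
    order_m = moving_idx[np.argsort(change_rates[moving_idx])]
    base_m = (1.0 - w.sum()) / max(len(order_m), 1)
    mid_m = (len(order_m) - 1) / 2.0
    for rank, idx in enumerate(order_m):
        w[idx] = base_m + delta * (rank - mid_m)
    return w
```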
(6) Human motion evaluation compares the test motion data to be evaluated with the standard motion data and computes the difference between the two based on the improved DTW algorithm, thereby determining the quality of the motion. The process has two aspects: first, an improved DTW algorithm is proposed on the basis of the traditional DTW algorithm; second, the test action is compared with the standard action with the improved DTW evaluation algorithm to obtain the evaluation result.
The invention provides an improved DTW algorithm. Although the traditional DTW algorithm has been applied in many fields, in the evaluation of actual taijiquan exercise the final relatively static posture of each move is more important than in other sports. The traditional DTW algorithm, however, only considers the Euclidean distance between two sequences when computing the feature distance difference; the Euclidean distance cannot reflect the speed and shape information of the sequence transformation, so the importance of the feature sequences during the relatively static phase of each taijiquan move cannot be highlighted in practical evaluation. The traditional DTW algorithm aims to find a matching distance that minimizes the cumulative distortion between two sequences. To find the matching path of two sequences intuitively, an n × m matrix A is constructed, where the value a(R(i), T(j)) of each element represents the distance D(R(i), T(j)) between the sequence points corresponding to that position; this distance is usually the Euclidean distance, i.e., D²(R(i), T(j)) = (R(i) − T(j))², and each matrix element (R(i), T(j)) represents the alignment of point R(i) with point T(j). To address the fact that the traditional DTW algorithm does not fully represent the motion feature sequence information in practical evaluation, the invention adds first-order and second-order differential features to the original data. First, three-dimensional vectors [qi, qi′, qi″] and [cj, cj′, cj″] are created from R(i) and T(j) using their own values together with their first- and second-order differentials. The slope of the sequence reflected by the first-order differential indicates the velocity of the motion, and the second-order differential provides key information about the sequence such as concavity/convexity and inflection points. Meanwhile, to highlight the importance of the relatively static posture of each move in taijiquan, the first-order differential is used as the argument of a Sigmoid function, which is then multiplied by the sum of the original data and the second-order differential, so that D(R(i), T(j)) is negatively correlated with the first-order differential; that is, when the motion speed is slow, D(R(i), T(j)) is large, increasing the weight of important actions in the taijiquan movement. With the three-dimensional vectors combined with the differentials, the algorithm redefines D(R(i), T(j)) as:
Figure BDA0001731534390000081 [equation image: redefined D(R(i), T(j))]
wherein the first order differential and the second order differential are respectively defined as:
Figure BDA0001731534390000082 [equation image: definition of the first-order differential qi′]
qi″=qi+1+qi-1-2qi
the method comprises the following specific steps of evaluating the motion based on the improved DTW algorithm: firstly, obtaining skeleton point data of a test motion based on kinect, constructing motion evaluation characteristic quantity based on the motion description method based on the space triangular plane, and then weighting the motion evaluation characteristic quantity based on the self-adaptive weighting method. And finally, based on the weighted motion evaluation characteristic quantity, calculating the matching degree of the test action and the standard action by using an improved DTW algorithm. The resulting degree of match reflects the quality of the test action relative to the standard action, i.e. the final score. Fig. 4 is a flow chart of action description and exercise evaluation.

Claims (1)

1. A motion description and evaluation method based on space triangular plane features specifically comprises the following steps:
(1) firstly, dividing the human body into three parts, namely the upper limbs, the lower limbs and the trunk, and constructing 9 feature triangular planes representing human movement based on the joint points of the human body; wherein the 9 feature triangular planes comprise the upper-limb triangular planes S1 and S2 formed between the left and right upper limbs, the upper-trunk triangular planes S3 and S4 formed by the left and right elbow joints, shoulder joints and hip joints, the crotch triangular plane S5 formed by the left knee joint, the right knee joint and the center point of the caudal vertebra, the lower-limb triangular planes S6 and S7 formed by the left and right ankle, knee and hip joints, and the lower-trunk triangular planes S8 and S9 formed by the left and right knee joints, the left and right hip joints and the left shoulder joint;
(2) forming the human torso plane S10 from the shoulder joints on both sides and the hip joints on both sides, and respectively calculating the relative areas of the 9 feature triangular planes with respect to the torso plane as motion evaluation feature quantities;
(3) calculating included angles between normal vectors of the 9 characteristic planes and the vertical direction, and taking the included angles as motion evaluation characteristic quantities;
(4) constructing, for the subject, a human motion descriptor consisting of 18 f-dimensional vectors based on the relative areas of the 9 feature triangular planes to the torso plane and the included angles between the 9 normal vectors and the vertical direction, wherein f is the number of frames of the motion;
(5) weighting the stationary part features and the moving part features:
weighting of stationary-part features: two cases are distinguished, namely static features with 0 weight and static features that are weighted; the feature type is judged by whether the state of the static feature is close to the natural standing state, and if a feature is static and close to the natural standing state, it is given a weight of 0; if a feature is static but differs greatly from the natural standing state, it is given a weight; the weight coefficients are determined as follows: assuming the number of 0-weight feature components is n, the current weight of each remaining feature component is 1/(18−n); the stationary-part features belong to secondary action parts, so a weakening factor ξ is applied to their weights; according to the difference between each feature and its value in the natural state, the weight is increased or decreased from the base value ξ/(18−n); first the number m of static weighted features is determined, the total weight of the static parts being ξm/(18−n); the features are sorted in increasing order of their difference values, the feature weight corresponding to the middle value remains ξ/(18−n), and the weights are then increased or decreased toward the two sides; each weight ws is:
Figure FDA0001731534380000021 [equation image: expression for ws]
where i is the rank distance between a feature and the middle-position feature after the features are sorted in increasing order of their difference values, the plus sign being taken to the right of the middle value and the minus sign to the left; if m is even, the two feature weights at the middle position are both ξ/(18−n);
weighting of moving-part features: the weighting of the moving-part features also follows the gradient allocation principle, but the sorting is based on the average inter-frame change rate of each feature; letting the number of motion features be p, with p = 18 − n − m, after the features are sorted in increasing order of average inter-frame change rate, the feature weight corresponding to the middle value is
Figure FDA0001731534380000022 [equation image: middle-value weight of the moving-part features]
and the weights are then increased or decreased toward the two sides, each moving-part weight wk being:
Figure FDA0001731534380000023 [equation image: expression for wk]
where i is the rank distance between a feature and the middle-position feature after the features are sorted in increasing order of average inter-frame change rate, the plus sign being taken to the right of the middle value and the minus sign to the left; if p is even, the two feature weights at the middle position are both
Figure FDA0001731534380000024 [equation image: middle-position weight when p is even]
The weakening factor ξ takes a value of 0.6–0.9 depending on the number of static weighted features and the intensity of the action; the final feature vector F is obtained as:
Figure FDA0001731534380000025 [equation image: final feature vector F]
wherein fa represents the a-th feature component and wa represents the weight of the a-th feature component;
(6) motion evaluation based on motion descriptors
firstly, obtaining the skeleton point data of the test motion from the Kinect, constructing the action descriptor with the action description method based on the spatial triangular planes, and weighting the action descriptor with the adaptive weighting method to obtain the final feature vector F; calculating the matching distance between the two sequences R(i) and T(j), wherein R(i) is the final feature vector F sequence of the action being compared and T(j) is the standard action feature vector sequence; creating three-dimensional vectors [qi, qi′, qi″] and [cj, cj′, cj″] from R(i) and T(j) using their own values together with their first- and second-order differentials; the first-order differential reflects the slope of the sequence and thus indicates the speed of motion, and the second-order differential provides concavity/convexity and turning points; taking the first-order differential as the argument of a Sigmoid function and multiplying it by the sum of the original data and the second-order differential, so that the inter-sequence distance D(R(i), T(j)) is negatively correlated with the first-order differential; after combining the differentiated three-dimensional vectors, the algorithm defines D(R(i), T(j)) as:
Figure FDA0001731534380000031 [equation image: definition of D(R(i), T(j))]
wherein the first order differential and the second order differential are respectively defined as:
Figure FDA0001731534380000032 [equation image: definition of the first-order differential qi′]
qi″=qi+1+qi-1-2qi
finally, constructing an M × N matrix A, wherein the value a(R(i), T(j)) of each element represents the distance D(R(i), T(j)) between the sequence points corresponding to that position; selecting an optimal path in the matrix based on the optimal path selection method, and accumulating the element values along the path to obtain the cumulative distance difference of the DTW matching algorithm; the cumulative distance difference is inversely related to the evaluation score.
CN201810776306.4A 2018-07-16 2018-07-16 Motion description and evaluation method based on space triangular plane features Active CN108985227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810776306.4A CN108985227B (en) 2018-07-16 2018-07-16 Motion description and evaluation method based on space triangular plane features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810776306.4A CN108985227B (en) 2018-07-16 2018-07-16 Motion description and evaluation method based on space triangular plane features

Publications (2)

Publication Number Publication Date
CN108985227A CN108985227A (en) 2018-12-11
CN108985227B true CN108985227B (en) 2021-06-11

Family

ID=64548377

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810776306.4A Active CN108985227B (en) 2018-07-16 2018-07-16 Motion description and evaluation method based on space triangular plane features

Country Status (1)

Country Link
CN (1) CN108985227B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210284A (en) * 2019-04-12 2019-09-06 哈工大机器人义乌人工智能研究院 A kind of human body attitude behavior intelligent Evaluation method
CN110175629B (en) * 2019-04-25 2023-05-23 上海师范大学 Human body action similarity calculation method and device
CN110354480B (en) * 2019-07-26 2021-04-16 南京邮电大学 Golf swing action score estimation method based on posture comparison
JP7413836B2 (en) * 2020-02-28 2024-01-16 富士通株式会社 Behavior recognition method, behavior recognition program, and behavior recognition device
CN112364770B (en) * 2020-11-11 2023-02-17 天津大学 Commercial Wi-Fi-based human activity recognition and action quality evaluation method

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160141023A (en) * 2015-05-27 2016-12-08 주식회사 에보시스 The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents
CN107423729A (en) * 2017-09-20 2017-12-01 湖南师范大学 A kind of remote class brain three-dimensional gait identifying system and implementation method towards under complicated visual scene

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20160141023A (en) * 2015-05-27 2016-12-08 주식회사 에보시스 The method of dynamic and static gesture recognition using depth camera and interface of immersive media contents
CN107423729A (en) * 2017-09-20 2017-12-01 湖南师范大学 A kind of remote class brain three-dimensional gait identifying system and implementation method towards under complicated visual scene

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Gesture segmentation based on a two-phase estimation of; Ke Liu, Dunwei Gong; Information Sciences; 2017-02-13; pp. 88-105 *
Real-time Action Recognition with Enhanced Motion Vector CNNs; Bowen Zhang, Limin Wang; ResearchGate; 2016-04-26; pp. 1-9 *
Spatio-temporal feature extraction method based on HTM architecture for human action recognition (人体动作识别中基于HTM架构的时空特征提取方法); 王向前, 孙挺; 《计算机应用研究》; 2017-12-15; pp. 3899-3903 *

Also Published As

Publication number Publication date
CN108985227A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
CN108985227B (en) Motion description and evaluation method based on space triangular plane features
CN104115192B (en) Three-dimensional closely interactive improvement or associated improvement
Deutscher et al. Articulated body motion capture by annealed particle filtering
Martínez-González et al. Efficient convolutional neural networks for depth-based multi-person pose estimation
Gupta et al. Context and observation driven latent variable model for human pose estimation
Monir et al. Rotation and scale invariant posture recognition using Microsoft Kinect skeletal tracking feature
CN105184767A (en) Moving human body attitude similarity measuring method
Ko et al. CNN and bi-LSTM based 3D golf swing analysis by frontal swing sequence images
Yu et al. A robust fall detection system for the elderly in a smart room
Nguyen et al. Estimating skeleton-based gait abnormality index by sparse deep auto-encoder
Sheu et al. Improvement of human pose estimation and processing with the intensive feature consistency network
CN112149531A (en) Human skeleton data modeling method in behavior recognition
Sharifi et al. Marker-based human pose tracking using adaptive annealed particle swarm optimization with search space partitioning
CN113240044B (en) Human skeleton data fusion evaluation method based on multiple Kinects
Zhang et al. Motion trajectory tracking of athletes with improved depth information-based KCF tracking method
Zhao et al. [Retracted] Recognition of Volleyball Player’s Arm Motion Trajectory and Muscle Injury Mechanism Analysis Based upon Neural Network Model
Raskin et al. Dimensionality reduction for articulated body tracking
Ahmed et al. Adaptive pooling of the most relevant spatio-temporal features for action recognition
Sun et al. 3D hand tracking with head mounted gaze-directed camera
Raskin et al. 3D Human Body-Part Tracking and Action Classification Using A Hierarchical Body Model.
Fossati et al. Observable subspaces for 3D human motion recovery
Li et al. Characteristic Behavior of Human Multi-Joint Spatial Trajectory in Slalom Skiing
Zhou et al. Motion balance ability detection based on video analysis in virtual reality environment
Yin et al. Multi-scale primal feature based facial expression modeling and identification
CN115205983B (en) Cross-perspective gait recognition method, system and equipment based on multi-feature aggregation

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
EE01 Entry into force of recordation of patent licensing contract

Application publication date: 20181211

Assignee: Ruixukang (Hangzhou) Intelligent Technology Co.,Ltd.

Assignor: HANGZHOU DIANZI University

Contract record no.: X2022330000044

Denomination of invention: An action description and evaluation method based on spatial triangular plane features

Granted publication date: 20210611

License type: Common License

Record date: 20220218

EE01 Entry into force of recordation of patent licensing contract