CN111144217A - Motion evaluation method based on human body three-dimensional joint point detection - Google Patents

Motion evaluation method based on human body three-dimensional joint point detection

Info

Publication number
CN111144217A
CN111144217A (application CN201911193095.2A)
Authority
CN
China
Prior art keywords
video
motion
similarity
vector
frame
Prior art date
Legal status
Granted
Application number
CN201911193095.2A
Other languages
Chinese (zh)
Other versions
CN111144217B (en)
Inventor
许国良 (Xu Guoliang)
李轶玮 (Li Yiwei)
李万林 (Li Wanlin)
文韬 (Wen Tao)
雒江涛 (Luo Jiangtao)
Current Assignee
Chongqing University of Post and Telecommunications
Original Assignee
Chongqing University of Post and Telecommunications
Priority date
Filing date
Publication date
Application filed by Chongqing University of Post and Telecommunications filed Critical Chongqing University of Post and Telecommunications
Priority to CN201911193095.2A
Publication of CN111144217A
Application granted
Publication of CN111144217B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V20/42 Higher-level, semantic clustering, classification or understanding of video scenes of sport video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/23 Clustering techniques
    • G06F18/232 Non-hierarchical techniques
    • G06F18/2321 Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213 Non-hierarchical techniques using statistics or function optimisation with fixed number of clusters, e.g. K-means clustering

Abstract

The invention relates to a motion evaluation method based on human body three-dimensional joint point detection, which belongs to the field of computer vision and comprises the following steps. S1: detect the three-dimensional joint points of the human body in each single-frame picture after video framing. S2: extract a specified number of key frames from the video. S3: construct motion vector features and joint kinetic energy features and extract their values. S4: construct a key frame action similarity comparison model by multi-feature fusion: combine the sub-features from step S3 and construct personalized models for different types of actions; construct a motion vector feature similarity function based on cosine similarity and a joint kinetic energy similarity function based on a weighting function; from these two similarity functions, obtain the key frame action similarity comparison model, compare the action to be detected against the key frame set of the standard action, and finally obtain the action similarity of the motion video. The method is accurate and systematic and can be used for correcting and teaching sports and fitness movements.

Description

Motion evaluation method based on human body three-dimensional joint point detection
Technical Field
The invention belongs to the technical field of computer vision, and particularly relates to a motion evaluation method based on human body three-dimensional joint point detection.
Background
With advances in artificial intelligence algorithms and computer image processing, pose estimation and behavior understanding of targets in video have become hot problems in the field of computer vision, with applications in many areas such as auxiliary sports training, abnormal behavior detection, and gesture and gait recognition.
Human posture estimation can be applied widely in sports: in sports and fitness teaching, human action recognition is used to capture and analyze movements, producing a personalized technical diagnosis report that serves as an auxiliary training tool for athletes and coaches and raises the level of the exerciser.
How standard the actions are during sports training determines the quality of the training effect, yet current sports and fitness training usually relies on a coach's observation and experience to give athletes technical guidance. Athletes also lack intuitive feedback, which reduces training efficiency.
At present, most intelligent sports-analysis products use a sensor-plus-app model. The Zepp tennis tracking sensor, for example, mounts on the bottom of the racket handle, records stroke speed, impact position, and so on, and feeds the data back in real time through an app. Similar products include the Zepp golf swing analyzer and the Coollang badminton sensor. Besides sensors, there are motion analysis products that use high-speed cameras, such as the SAP smart basketball coach, which analyzes a shooter's posture (take-off height, angle, etc.) with the help of high-speed cameras and a powerful computing platform. Such equipment is expensive, complex to operate, and unsuitable for ordinary exercisers. With the improvement of computer image and video processing and the rapid development of deep learning algorithms, analyzing human motion posture from video has become feasible.
However, there is still no effective solution for analyzing and evaluating a target person's movements from sports video. The first difficulty is that two-dimensional human posture estimation is easily affected by occlusion, and its accuracy is low for unconventional poses such as crossed or overlapping joints. The second is that individuals differ in build, being fat or thin, tall or short, so evaluating actions by directly computing Euclidean distances between joint points gives low accuracy. The third is that different people perform the same action faster or slower, so the videos cannot be compared frame by frame.
In summary, there is currently no mature algorithm or product for judging whether the actions in a motion video are standard.
Disclosure of Invention
In view of the above, the present invention is directed to a motion evaluation method based on human body three-dimensional joint point detection, which solves the problem of motion standard evaluation.
In order to achieve the purpose, the invention provides the following technical scheme:
a motion evaluation method based on human body three-dimensional joint point detection comprises the following steps:
S1: detecting three-dimensional joint points of the human body: detecting the three-dimensional joint points of the human body in each single-frame picture after video framing;
S2: extracting key frames: extracting a specified number of key frames from the video to time-align the video to be detected with the standard video;
S3: constructing and extracting features based on the joint points: constructing two types of sub-features and extracting their feature values, namely:
constructing motion vector features: considering that human action postures comprise head movement, limb movement, and chest-and-waist movement, selecting the limbs that express the movement information to form the motion vector features;
constructing joint kinetic energy features: calculating the kinetic energy of each joint point in each frame from the change of its coordinates between two adjacent frames of the video;
S4: constructing a key frame action similarity comparison model by multi-feature fusion: combining the sub-features from step S3 and constructing personalized models for different types of actions;
constructing a motion vector feature similarity function based on cosine similarity, and a joint kinetic energy similarity function based on a weighting function;
and obtaining a key frame action similarity comparison model from these two similarity functions, comparing the action to be detected against the key frame set of the standard action, and finally obtaining the action similarity of the motion video.
Further, in step S1, the three-dimensional joint point coordinates of the human body in the video are obtained using a three-dimensional joint point detection network based on a deep learning algorithm.
Further, in step S1, inputting a video, implementing 3D human body posture estimation from 2D joint point trajectories by using a time-space domain convolution algorithm, and outputting three-dimensional joint point coordinate information;
among the three-dimensional joint point coordinate information, Loc_{i,t} = (x, y, z) denotes the coordinate position of the human skeletal joint point numbered i in frame t, covering 17 skeletal joint points: head top, nose, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, waist, mid-hip, left hip, right hip, left knee, right knee, left ankle, and right ankle.
Further, in step S2, the key frames are extracted with a clustering algorithm: the three-dimensional joint coordinates are clustered, k cluster centers are selected, the distance between the joint coordinates of each frame and those of each cluster center is calculated, and the frame closest to each cluster center is selected as a key frame, yielding k key frames in total; the key frames are then sorted by time index to obtain the key frame set of the video.
Further, in step S2, the Euclidean distance O_{k,t} between the joint coordinates Loc_{i,t} of frame t and the joint coordinates Loc_{i,k} of the frame at the k-th cluster center is calculated as

$$O_{k,t}=\sqrt{\sum_{i=1}^{n}\left\|Loc_{i,t}-Loc_{i,k}\right\|^{2}}$$

where n is the number of joint points. For a standard motion sequence X = {X_1, X_2, X_3, ... X_N} and a motion sequence to be detected Y = {Y_1, Y_2, Y_3, ... Y_M}, where N and M are the sequence lengths, the frame closest to each cluster center is selected as a key frame. Sorting the key frames by time index gives the key frame set {f_1, f_2, ..., f_k}, f_i ∈ Y, of the video to be detected and the key frame set {f'_1, f'_2, ..., f'_k}, f'_i ∈ X, of the standard video.
Further, in step S3, the motion vector features are constructed by selecting limb components that represent the movement information, comprising 9 limb vectors: a neck vector, left upper arm vector, left lower arm vector, right upper arm vector, right lower arm vector, left upper leg vector, left lower leg vector, right upper leg vector, and right lower leg vector; and 6 motion-plane normal vector features formed from cross products of the limb vectors: a left arm normal vector, right arm normal vector, left leg normal vector, right leg normal vector, chest normal vector, and hip normal vector, for 15 motion vector features in total.
Further, in step S3, the joint kinetic energy feature is calculated as

$$E_{i,t}=c\left(\frac{\left\|Loc_{i,t}-Loc_{i,t-\Delta t}\right\|}{\Delta t}\right)^{2}$$

where E_{i,t} is the kinetic energy feature value of the i-th joint point in frame t, Loc_{i,t} is the coordinate position of the human skeletal joint point numbered i in frame t, c is a kinetic energy parameter, and Δt is the frame difference, so that the formula captures the change of kinetic energy between two frames.
Further, in step S4, the features extracted in step S3 are fused to construct the key frame action similarity comparison model, and a motion vector feature similarity function is proposed:

$$SV_{t}=\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{V_{i,t}\cdot V'_{i,t}}{\left\|V_{i,t}\right\|\left\|V'_{i,t}\right\|}\right)$$

which measures the motion vector feature similarity between the video to be detected and the standard video at key frame t, where V_{i,t} is the i-th motion vector feature value of the video to be detected, V'_{i,t} is the i-th motion vector feature value of the standard video, and n is the number of motion vector features;
and a joint kinetic energy similarity function is proposed:

$$SE_{t}=\sum_{j=1}^{m}w_{j}\left|E_{j,t}-E'_{j,t}\right|$$

which measures the joint kinetic energy feature similarity between the video to be detected and the standard video at key frame t, where E_{j,t} is the j-th joint kinetic energy feature value of the video to be detected, E'_{j,t} is the j-th joint kinetic energy feature value of the standard video, m is the number of joint kinetic energy features, and the weights w_j are assigned differently according to the action type;
a key frame action similarity comparison model is then constructed from the two similarity functions:

d(f_t, f'_t) = SV_t + SE_t

which measures the posture similarity between key frame f_t of the video to be detected and key frame f'_t of the standard video;
and finally an action sequence similarity evaluation function is obtained from the key frame similarities:

$$D(X,Y)=\frac{1}{k}\sum_{t=1}^{k}d(f_{t},f'_{t})$$

where D(X, Y) is the action difference distance between the video to be detected and the standard video, k is the number of key frames, and a smaller value indicates more similar actions.
The invention has the following beneficial effects. Compared with existing motion evaluation methods based on two-dimensional joint points, it fully exploits deep convolutional neural networks and a time-space domain convolution algorithm to lift two-dimensional coordinate trajectories into three-dimensional coordinates, so human motion can be evaluated more accurately. The evaluation model fuses multiple features: the motion vector features capture the relative positions and angles of the joints, while the joint kinetic energy features capture the amplitude and frequency of the motion, giving a more comprehensive assessment of how standard an action is. Finally, a clustering-based key frame extraction method performs time alignment, solving the problem of actions of unequal duration.
Additional advantages, objects, and features of the invention will be set forth in part in the description which follows and in part will become apparent to those having ordinary skill in the art upon examination of the following or may be learned from practice of the invention. The objectives and other advantages of the invention may be realized and attained by the means of the instrumentalities and combinations particularly pointed out hereinafter.
Drawings
For the purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made to the following detailed description taken in conjunction with the accompanying drawings in which:
FIG. 1 is a schematic flow chart of a motion evaluation method based on human body three-dimensional joint point detection according to the present invention;
FIG. 2 is a schematic diagram of the detection of three-dimensional joint points of a human body according to the present invention;
FIG. 3 is a schematic diagram of 17 human skeletal joints according to the present invention;
FIG. 4 is a schematic diagram of key frame extraction provided by the present invention;
FIG. 5 is a diagram illustrating the motion vector characteristics provided by the present invention.
Detailed Description
The embodiments of the invention are described below by way of specific examples, and those skilled in the art can easily understand other advantages and effects of the invention from the disclosure of this specification. The invention may also be implemented or applied through other, different embodiments, and the details in this specification may be modified or changed in various ways without departing from the spirit and scope of the invention. The drawings provided in the following embodiments illustrate the basic idea of the invention only schematically, and the features of the following embodiments and examples may be combined with one another in the absence of conflict.
The drawings are for illustration only and are not intended to limit the invention; to better illustrate the embodiments, some parts of the drawings may be omitted, enlarged, or reduced and do not reflect the size of an actual product; and it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments correspond to the same or similar components. In the description of the invention, terms indicating orientation or position such as "upper", "lower", "left", "right", "front", and "rear" are based on the orientations shown in the drawings; they are used only for convenience and simplification of description, do not indicate that the device or element referred to must have a specific orientation or be constructed and operated in a specific orientation, and are therefore not to be construed as limiting the invention; the specific meaning of these terms can be understood by those skilled in the art according to the specific situation.
As shown in fig. 1, the present invention provides a motion evaluation method based on human body joint point detection, comprising the following steps:
step S1: detecting human body joint points: and carrying out joint point detection on the single-frame picture after the video is framed.
Step S11: the two-dimensional position coordinates of the human skeleton are obtained directly from the images by a human joint point detection network, and the two-dimensional coordinates are then converted into three-dimensional coordinates by a time-space domain convolution algorithm using the coordinate information of adjacent frames.
The detected three-dimensional coordinates Loc_{i,t} = (x, y, z) give the coordinate position of the human skeletal joint point numbered i in frame t, covering 17 skeletal joint points: head top, nose, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, waist, mid-hip, left hip, right hip, left knee, right knee, left ankle, and right ankle.
Fig. 2 shows an example of the detection of three-dimensional joint points of a human body.
Step S12: store the joint point coordinate information, where Loc_{i,t} = (x, y, z) is the coordinate position of the human skeletal joint point numbered i in frame t.
As shown in fig. 3, 17 human skeletal joint points are shown.
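To make the data layout concrete, the following minimal sketch (Python with NumPy) shows one way the detected coordinates might be stored; the joint index order and all names are illustrative assumptions, not something the patent prescribes.

```python
import numpy as np

# Assumed index order for the 17 skeletal joints (illustrative only).
JOINTS = [
    "head_top", "nose", "neck", "l_shoulder", "r_shoulder",
    "l_elbow", "r_elbow", "l_wrist", "r_wrist", "waist",
    "mid_hip", "l_hip", "r_hip", "l_knee", "r_knee",
    "l_ankle", "r_ankle",
]

T = 300                              # number of video frames (example value)
# loc[t, i] holds Loc_{i,t} = (x, y, z), joint i in frame t,
# as produced by the 2D detector plus time-space domain convolution lifting.
loc = np.zeros((T, len(JOINTS), 3))
```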
Step S2: extract video key frames by clustering the joint point coordinates: a specified number of key frames are extracted with the k-means clustering algorithm.
Step S21: from the frame set X_N, select k frames as initial cluster centers, denoted C = {c_1, c_2, ..., c_k}.
Step S22: calculate the Euclidean distance between the joints of each frame and the corresponding joints of each cluster center,

$$O_{k,t}=\sqrt{\sum_{i=1}^{n}\left\|Loc_{i,t}-Loc_{i,k}\right\|^{2}}$$

and assign the frame to the cluster whose center is nearest. In the formula, Loc_{i,k} is the joint coordinate of the frame at the k-th cluster center, n is the number of joint points, and k is the number of cluster centers.
Step S23: for each cluster c_i, recalculate its center as the mean of the frames assigned to it,

$$c_{i}=\frac{1}{\left|c_{i}\right|}\sum_{t\in c_{i}}Loc_{t}$$

and repeat steps S22 and S23 until the cluster centers no longer change.
Step S24: select the frame nearest to each cluster center as a key frame and sort the key frames by time index. For a standard motion sequence X = {X_1, X_2, X_3, ... X_N} and a motion sequence to be detected Y = {Y_1, Y_2, Y_3, ... Y_M}, where N and M are the sequence lengths, this yields the key frame set {f_1, f_2, ..., f_k}, f_i ∈ Y, of the video to be detected and the key frame set {f'_1, f'_2, ..., f'_k}, f'_i ∈ X, of the standard video.
Referring to fig. 4, a diagram illustrating key frame extraction is shown.
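As a concrete illustration of steps S21 through S24, the sketch below uses scikit-learn's k-means in place of the hand-rolled iteration; the function name and the choice of flattening each frame into one 51-dimensional vector are assumptions made for the example.

```python
import numpy as np
from sklearn.cluster import KMeans

def extract_keyframes(loc: np.ndarray, k: int) -> list:
    """Return k key-frame indices, sorted by time (steps S21-S24).

    loc: (T, 17, 3) array of 3D joint positions for one video.
    Each frame is flattened to a 51-dim vector, the frames are
    clustered into k groups, and the frame nearest each cluster
    center (minimum Euclidean distance O_{k,t}) is kept.
    """
    flat = loc.reshape(loc.shape[0], -1)
    km = KMeans(n_clusters=k, n_init=10).fit(flat)
    keyframes = []
    for center in km.cluster_centers_:
        dist = np.linalg.norm(flat - center, axis=1)  # O_{k,t} per frame
        keyframes.append(int(np.argmin(dist)))
    return sorted(keyframes)  # order key frames by time index

# keyframes_std  = extract_keyframes(loc_standard, k=10)
# keyframes_test = extract_keyframes(loc_test, k=10)
```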
Step S3: construct and extract features based on the joint points: construct the sub-features and extract them from both the video to be detected and the standard video.
Step S31: extracting the motion vector characteristics: considering that the action posture of the human body includes head movement, limb movement and chest and waist movement, 15 motion vector features as shown in the following table are selected.
Limb vectors (9): neck, left upper arm, left lower arm, right upper arm, right lower arm, left upper leg, left lower leg, right upper leg, right lower leg.
Plane normal vectors (6, formed from cross products of the limb vectors): left arm, right arm, left leg, right leg, chest, hip.
Referring to fig. 5, the limb vectors and plane normal vectors are shown in three-dimensional space.
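A sketch of step S31 follows; the joint indices match the assumed order given earlier, and the particular vector pairings used for the six plane normals are an assumption for illustration.

```python
import numpy as np

# Joint indices under the assumed ordering introduced above.
NECK, L_SHO, R_SHO, L_ELB, R_ELB = 2, 3, 4, 5, 6
L_WRI, R_WRI, MID_HIP, L_HIP, R_HIP = 7, 8, 10, 11, 12
L_KNE, R_KNE, L_ANK, R_ANK = 13, 14, 15, 16

def motion_vector_features(frame: np.ndarray) -> np.ndarray:
    """15 motion vector features of one (17, 3) frame: 9 limb
    vectors plus 6 plane normals built from their cross products."""
    limbs = [
        frame[NECK] - frame[MID_HIP],                 # neck/torso axis (assumed)
        frame[L_ELB] - frame[L_SHO], frame[L_WRI] - frame[L_ELB],  # left arm
        frame[R_ELB] - frame[R_SHO], frame[R_WRI] - frame[R_ELB],  # right arm
        frame[L_KNE] - frame[L_HIP], frame[L_ANK] - frame[L_KNE],  # left leg
        frame[R_KNE] - frame[R_HIP], frame[R_ANK] - frame[R_KNE],  # right leg
    ]
    normals = [
        np.cross(limbs[1], limbs[2]),                 # left arm plane
        np.cross(limbs[3], limbs[4]),                 # right arm plane
        np.cross(limbs[5], limbs[6]),                 # left leg plane
        np.cross(limbs[7], limbs[8]),                 # right leg plane
        np.cross(frame[L_SHO] - frame[NECK],
                 frame[R_SHO] - frame[NECK]),         # chest plane
        np.cross(frame[L_HIP] - frame[MID_HIP],
                 frame[R_HIP] - frame[MID_HIP]),      # hip plane
    ]
    return np.stack(limbs + normals)                  # shape (15, 3)
```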
Step S32: extract the joint kinetic energy features:

$$E_{i,t}=c\left(\frac{\left\|Loc_{i,t}-Loc_{i,t-\Delta t}\right\|}{\Delta t}\right)^{2}$$

where E_{i,t} is the kinetic energy of the i-th joint point in frame t, Loc_{i,t} is the three-dimensional coordinate of the i-th joint point, c is a kinetic energy parameter whose value depends on the motion, and Δt is the time interval between two adjacent frames.
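A sketch of step S32 under the reconstructed formula above; the 30 fps default and the function name are assumptions.

```python
import numpy as np

def joint_kinetic_energy(loc: np.ndarray, c: float = 1.0,
                         dt: float = 1 / 30) -> np.ndarray:
    """Per-joint kinetic energy E_{i,t} for every frame (step S32).

    loc: (T, 17, 3) joint positions; c: motion-dependent kinetic
    energy parameter; dt: interval between adjacent frames.
    E_{i,t} is proportional to the squared per-frame displacement.
    """
    disp = np.diff(loc, axis=0)                        # (T-1, 17, 3)
    energy = c * (np.linalg.norm(disp, axis=2) / dt) ** 2
    # Pad frame 0, which has no predecessor, with zero energy.
    return np.vstack([np.zeros((1, loc.shape[1])), energy])
```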
Step S4: fuse the sub-features to construct the key frame action similarity comparison model.
Step S41: construct the motion vector feature similarity function based on cosine similarity:

$$SV_{t}=\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{V_{i,t}\cdot V'_{i,t}}{\left\|V_{i,t}\right\|\left\|V'_{i,t}\right\|}\right)$$

where V_{i,t} and V'_{i,t} are the motion vector feature values at key frame t of the video to be detected and the standard video, respectively, and n = 15 is the number of motion vector features.
Step S42: construct the joint kinetic energy feature similarity function based on a weighting function:

$$SE_{t}=\sum_{j=1}^{m}w_{j}\left|E_{j,t}-E'_{j,t}\right|$$

where E_{j,t} and E'_{j,t} are the joint kinetic energy feature values at key frame t of the video to be detected and the standard video, respectively, and m = 17 is the number of joint kinetic energy features. The weights w_j, each in the range [0, 1], are assigned according to the action type.
Step S43: fuse the two similarity functions into the key frame similarity comparison function d(f_t, f'_t) = SV_t + SE_t and calculate the distance d(f_t, f'_t) for each key frame of the video to be detected.
Step S44: evaluate the action sequences with the key frame similarity model; the similarity of the two action sequences is obtained by

$$D(X,Y)=\frac{1}{k}\sum_{t=1}^{k}d(f_{t},f'_{t})$$

where D(X, Y) is the action difference distance of the motion sequences X and Y, and k is the number of key frames. A smaller distance indicates more similar actions.
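Putting steps S41 through S44 together, a minimal sketch of the fused comparison model follows, using the reconstructed forms of SV_t and SE_t above; all names and signatures are illustrative.

```python
import numpy as np

def pose_distance(v, v_ref, e, e_ref, w):
    """d(f_t, f'_t) = SV_t + SE_t for one aligned key-frame pair.

    v, v_ref: (15, 3) motion vector features of the test and standard
    key frames; e, e_ref: (17,) joint kinetic energies; w: (17,)
    per-joint weights in [0, 1] chosen for the action type.
    """
    cos = np.sum(v * v_ref, axis=1) / (
        np.linalg.norm(v, axis=1) * np.linalg.norm(v_ref, axis=1) + 1e-8)
    sv = np.mean(1.0 - cos)               # cosine-distance term SV_t
    se = np.sum(w * np.abs(e - e_ref))    # weighted kinetic-energy term SE_t
    return sv + se

def sequence_distance(V, V_ref, E, E_ref, w):
    """D(X, Y): mean key-frame distance over the k aligned pairs (S44)."""
    k = len(V)
    return float(np.mean(
        [pose_distance(V[t], V_ref[t], E[t], E_ref[t], w) for t in range(k)]))
```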
Finally, the above embodiments are only intended to illustrate the technical solutions of the present invention and not to limit the present invention, and although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that modifications or equivalent substitutions may be made on the technical solutions of the present invention without departing from the spirit and scope of the technical solutions, and all of them should be covered by the claims of the present invention.

Claims (8)

1. A motion evaluation method based on human body three-dimensional joint point detection, characterized by comprising the following steps:
S1: detecting three-dimensional joint points of the human body: detecting the three-dimensional joint points of the human body in each single-frame picture after video framing;
S2: extracting key frames: extracting a specified number of key frames from the video to time-align the video to be detected with the standard video;
S3: constructing and extracting features based on the joint points: constructing two types of sub-features and extracting their feature values, namely:
constructing motion vector features: considering that human action postures comprise head movement, limb movement, and chest-and-waist movement, selecting the limbs that express the movement information to form the motion vector features;
constructing joint kinetic energy features: calculating the kinetic energy of each joint point in each frame from the change of its coordinates between two adjacent frames of the video;
S4: constructing a key frame action similarity comparison model by multi-feature fusion: combining the sub-features from step S3 and constructing personalized models for different types of actions; constructing a motion vector feature similarity function based on cosine similarity and a joint kinetic energy similarity function based on a weighting function; and obtaining a key frame action similarity comparison model from these two similarity functions, comparing the action to be detected against the key frame set of the standard action, and finally obtaining the action similarity of the motion video.
2. The motion evaluation method based on human body three-dimensional joint point detection according to claim 1, characterized in that: in step S1, a three-dimensional joint point detection network based on a deep learning algorithm is used to obtain the coordinates of the three-dimensional joint points of the human body in the video.
3. The motion evaluation method based on human body three-dimensional joint point detection according to claim 2, characterized in that: in the step S1, inputting a video, realizing 3D human body posture estimation from a 2D joint point track by using a time-space domain convolution algorithm, and outputting three-dimensional joint point coordinate information;
among the three-dimensional joint point coordinate information, Loc_{i,t} = (x, y, z) denotes the coordinate position of the human skeletal joint point numbered i in frame t, covering 17 skeletal joint points: head top, nose, neck, left shoulder, right shoulder, left elbow, right elbow, left wrist, right wrist, waist, mid-hip, left hip, right hip, left knee, right knee, left ankle, and right ankle.
4. The motion evaluation method based on human body three-dimensional joint point detection according to claim 1, characterized in that: in step S2, the key frames are extracted with a clustering algorithm: the three-dimensional joint coordinates are clustered, k cluster centers are selected, the distance between the joint coordinates of each frame and those of each cluster center is calculated, and the frame closest to each cluster center is selected as a key frame, yielding k key frames in total; the key frames are then sorted by time index to obtain the key frame set of the video.
5. The motion evaluation method based on human body three-dimensional joint point detection according to claim 4, characterized in that: in step S2, the Euclidean distance O_{k,t} between the joint coordinates Loc_{i,t} of frame t and the joint coordinates Loc_{i,k} of the frame at the k-th cluster center is calculated as

$$O_{k,t}=\sqrt{\sum_{i=1}^{n}\left\|Loc_{i,t}-Loc_{i,k}\right\|^{2}}$$

where n is the number of joint points; for a standard motion sequence X = {X_1, X_2, X_3, ... X_N} and a motion sequence to be detected Y = {Y_1, Y_2, Y_3, ... Y_M}, where N and M are the sequence lengths, the frame closest to each cluster center is selected as a key frame, and the key frames are sorted by time index to obtain the key frame set {f_1, f_2, ..., f_k}, f_i ∈ Y, of the video to be detected and the key frame set {f'_1, f'_2, ..., f'_k}, f'_i ∈ X, of the standard video.
6. The motion evaluation method based on human body three-dimensional joint point detection according to claim 1, characterized in that: in step S3, the motion vector features are constructed by selecting limb components that represent the movement information, comprising 9 limb vectors: a neck vector, left upper arm vector, left lower arm vector, right upper arm vector, right lower arm vector, left upper leg vector, left lower leg vector, right upper leg vector, and right lower leg vector; and 6 motion-plane normal vector features formed from cross products of the limb vectors: a left arm normal vector, right arm normal vector, left leg normal vector, right leg normal vector, chest normal vector, and hip normal vector, for 15 motion vector features in total.
7. The motion evaluation method based on human body three-dimensional joint point detection according to claim 6, characterized in that: in step S3, the joint kinetic energy feature is calculated as

$$E_{i,t}=c\left(\frac{\left\|Loc_{i,t}-Loc_{i,t-\Delta t}\right\|}{\Delta t}\right)^{2}$$

where E_{i,t} is the kinetic energy feature value of the i-th joint point in frame t, Loc_{i,t} is the coordinate position of the human skeletal joint point numbered i in frame t, c is a kinetic energy parameter, and Δt is the frame difference, capturing the change of kinetic energy between two frames.
8. The motion evaluation method based on human body three-dimensional joint point detection according to claim 1, characterized in that step S4 specifically comprises the following steps:
S41: using the features extracted in step S3, constructing the key frame action similarity comparison model by fusing multiple features, and proposing a motion vector feature similarity function:

$$SV_{t}=\frac{1}{n}\sum_{i=1}^{n}\left(1-\frac{V_{i,t}\cdot V'_{i,t}}{\left\|V_{i,t}\right\|\left\|V'_{i,t}\right\|}\right)$$

which measures the motion vector feature similarity between the video to be detected and the standard video at key frame t, where V_{i,t} is the i-th motion vector feature value of the video to be detected, V'_{i,t} is the i-th motion vector feature value of the standard video, and n is the number of motion vector features;
S42: proposing a joint kinetic energy similarity function:

$$SE_{t}=\sum_{j=1}^{m}w_{j}\left|E_{j,t}-E'_{j,t}\right|$$

which measures the joint kinetic energy feature similarity between the video to be detected and the standard video at key frame t, where E_{j,t} is the j-th joint kinetic energy feature value of the video to be detected, E'_{j,t} is the j-th joint kinetic energy feature value of the standard video, m is the number of joint kinetic energy features, and the weights w_j are assigned according to the action type;
S43: constructing the key frame action similarity comparison model from the two similarity functions:

d(f_t, f'_t) = SV_t + SE_t

which measures the posture similarity between key frame f_t of the video to be detected and key frame f'_t of the standard video;
S44: obtaining the action sequence similarity evaluation function from the key frame similarities:

$$D(X,Y)=\frac{1}{k}\sum_{t=1}^{k}d(f_{t},f'_{t})$$

where D(X, Y) is the action difference distance between the video to be detected and the standard video, k is the number of key frames, and a smaller value indicates more similar actions.
CN201911193095.2A 2019-11-28 2019-11-28 Motion evaluation method based on human body three-dimensional joint point detection Active CN111144217B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911193095.2A CN111144217B (en) 2019-11-28 2019-11-28 Motion evaluation method based on human body three-dimensional joint point detection

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911193095.2A CN111144217B (en) 2019-11-28 2019-11-28 Motion evaluation method based on human body three-dimensional joint point detection

Publications (2)

Publication Number Publication Date
CN111144217A true CN111144217A (en) 2020-05-12
CN111144217B CN111144217B (en) 2022-07-01

Family

ID=70517314

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911193095.2A Active CN111144217B (en) 2019-11-28 2019-11-28 Motion evaluation method based on human body three-dimensional joint point detection

Country Status (1)

Country Link
CN (1) CN111144217B (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170119302A1 (en) * 2012-10-16 2017-05-04 University Of Florida Research Foundation, Incorporated Screening for neurological disease using speech articulation characteristics
WO2017115887A1 (en) * 2015-12-29 2017-07-06 경일대학교 산학협력단 Device for providing motion recognition-based game, method for same, and computer-readable recording medium on which said method is recorded
CN105912985A (en) * 2016-04-01 2016-08-31 上海理工大学 Human skeleton joint point behavior motion expression method based on energy function
WO2018066359A1 (en) * 2016-10-07 2018-04-12 パイオニア株式会社 Examination device, examination method, computer program, and recording medium
CN110096950A (en) * 2019-03-20 2019-08-06 西北大学 A kind of multiple features fusion Activity recognition method based on key frame

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
JIANGTAO LUO et al.: "Identity Based Approach Under a Unified Service Model for Secure Content Distribution in ICN", 2018 1st IEEE International Conference on Hot Information-Centric Networking (HotICN), 17 August 2018 *
FENG Lin et al.: "Retrieval in a human motion capture database based on a motion energy model", Journal of Computer-Aided Design & Computer Graphics, no. 08, 15 August 2007 *
SHI Nianfeng et al.: "Key frame extraction from sports video by combining pose estimation and tracking", Video Engineering, 17 May 2017 *
XU Guoliang et al.: "Target segmentation of 3D lidar point clouds based on depth maps", Chinese Journal of Lasers, 31 July 2019 *

Cited By (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7367775B2 (en) 2019-12-24 2023-10-24 日本電気株式会社 Feature learning system, feature learning method and program
CN111931804A (en) * 2020-06-18 2020-11-13 南京信息工程大学 RGBD camera-based automatic human body motion scoring method
CN111931804B (en) * 2020-06-18 2023-06-27 南京信息工程大学 Human body action automatic scoring method based on RGBD camera
CN111563487A (en) * 2020-07-14 2020-08-21 平安国际智慧城市科技股份有限公司 Dance scoring method based on gesture recognition model and related equipment
WO2022028136A1 (en) * 2020-08-06 2022-02-10 上海哔哩哔哩科技有限公司 Movement extraction method and apparatus for dance video, computer device, and storage medium
CN112205979A (en) * 2020-08-18 2021-01-12 同济大学 Device and method for measuring mechanical energy of moving human body in real time
CN111967407A (en) * 2020-08-20 2020-11-20 咪咕互动娱乐有限公司 Action evaluation method, electronic device, and computer-readable storage medium
CN111967407B (en) * 2020-08-20 2023-10-20 咪咕互动娱乐有限公司 Action evaluation method, electronic device, and computer-readable storage medium
CN111985853A (en) * 2020-09-10 2020-11-24 成都拟合未来科技有限公司 Interactive practice ranking evaluation method, system, terminal and medium
CN112085105A (en) * 2020-09-10 2020-12-15 上海庞勃特科技有限公司 Motion similarity evaluation method based on human body shape and posture estimation
CN112582064A (en) * 2020-11-05 2021-03-30 中国科学院深圳先进技术研究院 Action evaluation method, device, equipment and storage medium
CN112487965A (en) * 2020-11-30 2021-03-12 重庆邮电大学 Intelligent fitness action guiding method based on 3D reconstruction
CN112487965B (en) * 2020-11-30 2023-01-31 重庆邮电大学 Intelligent fitness action guiding method based on 3D reconstruction
CN112528823A (en) * 2020-12-04 2021-03-19 燕山大学 Striped shark movement behavior analysis method and system based on key frame detection and semantic component segmentation
CN112464847A (en) * 2020-12-07 2021-03-09 北京邮电大学 Human body action segmentation method and device in video
US11625938B2 (en) 2020-12-29 2023-04-11 Industrial Technology Research Institute Method and device for detecting human skeletons
CN112842261B (en) * 2020-12-30 2021-12-28 西安交通大学 Intelligent evaluation system for three-dimensional spontaneous movement of infant based on complex network
CN112842261A (en) * 2020-12-30 2021-05-28 西安交通大学 Intelligent evaluation system for three-dimensional spontaneous movement of infant based on complex network
CN112989121B (en) * 2021-03-08 2023-07-28 武汉大学 Time sequence action evaluation method based on key frame preference
CN112989121A (en) * 2021-03-08 2021-06-18 武汉大学 Time sequence action evaluation method based on key frame preference
CN113052138B (en) * 2021-04-25 2024-03-15 广海艺术科创(深圳)有限公司 Intelligent contrast correction method for dance and movement actions
CN113052138A (en) * 2021-04-25 2021-06-29 广海艺术科创(深圳)有限公司 Intelligent contrast correction method for dance and movement actions
CN113033501A (en) * 2021-05-06 2021-06-25 泽恩科技有限公司 Human body classification method and device based on joint quaternion
CN113239797A (en) * 2021-05-12 2021-08-10 中科视语(北京)科技有限公司 Human body action recognition method, device and system
CN113221815A (en) * 2021-05-25 2021-08-06 北京无垠创新科技有限责任公司 Gait identification method based on automatic detection technology of skeletal key points
CN113401774A (en) * 2021-05-26 2021-09-17 杭州法维莱科技有限公司 90 degrees vertical hinged door systems of elevator with prevent pressing from both sides function
CN113392745A (en) * 2021-06-04 2021-09-14 北京格灵深瞳信息技术股份有限公司 Abnormal action correction method, abnormal action correction device, electronic equipment and computer storage medium
CN113486771A (en) * 2021-06-30 2021-10-08 福州大学 Video motion uniformity evaluation method and system based on key point detection
CN113486771B (en) * 2021-06-30 2023-07-07 福州大学 Video action uniformity evaluation method and system based on key point detection
CN113327267A (en) * 2021-07-15 2021-08-31 东南大学 Action evaluation method based on monocular RGB video
CN113780206A (en) * 2021-09-16 2021-12-10 福建平潭瑞谦智能科技有限公司 Video image analysis processing method
WO2023106201A1 (en) * 2021-12-09 2023-06-15 Necソリューションイノベータ株式会社 Play analysis device, play analysis method, and computer-readable storage medium
CN114534224A (en) * 2022-01-13 2022-05-27 上海凯视力成科技有限公司 Intelligent mirror for golf swing
CN114373531B (en) * 2022-02-28 2022-10-25 深圳市旗扬特种装备技术工程有限公司 Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium
CN114373531A (en) * 2022-02-28 2022-04-19 深圳市旗扬特种装备技术工程有限公司 Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium
CN116805433A (en) * 2023-06-27 2023-09-26 北京奥康达体育科技有限公司 Human motion trail data analysis system
CN116805433B (en) * 2023-06-27 2024-02-13 北京奥康达体育科技有限公司 Human motion trail data analysis system

Also Published As

Publication number Publication date
CN111144217B (en) 2022-07-01

Similar Documents

Publication Publication Date Title
CN111144217B (en) Motion evaluation method based on human body three-dimensional joint point detection
Kamal et al. A hybrid feature extraction approach for human detection, tracking and activity recognition using depth sensors
CN109934111B (en) Fitness posture estimation method and system based on key points
CN106650687B (en) Posture correction method based on depth information and skeleton information
Uddin et al. Human activity recognition using body joint‐angle features and hidden Markov model
CN109344694B (en) Human body basic action real-time identification method based on three-dimensional human body skeleton
CN105512621A (en) Kinect-based badminton motion guidance system
Yang et al. Human upper limb motion analysis for post-stroke impairment assessment using video analytics
CN114067358A (en) Human body posture recognition method and system based on key point detection technology
CN112668531A (en) Motion posture correction method based on motion recognition
CN113255522B (en) Personalized motion attitude estimation and analysis method and system based on time consistency
CN106846372B (en) Human motion quality visual analysis and evaluation system and method thereof
Elaoud et al. Skeleton-based comparison of throwing motion for handball players
CN106815855A (en) Based on the human body motion tracking method that production and discriminate combine
Ko et al. CNN and bi-LSTM based 3D golf swing analysis by frontal swing sequence images
CN109993116A (en) A kind of pedestrian mutually learnt based on skeleton recognition methods again
CN111539364B (en) Multi-somatosensory human behavior recognition algorithm based on feature fusion and multi-classifier voting
Almasi et al. Investigating the Application of Human Motion Recognition for Athletics Talent Identification using the Head-Mounted Camera
CN114191797B (en) Free skiing intelligent training system
CN114360052A (en) Intelligent somatosensory coach system based on AlphaPose and joint point angle matching algorithm
Zhao et al. Recognition of Volleyball Player's Arm Motion Trajectory and Muscle Injury Mechanism Analysis Based upon Neural Network Model
CN112364815A (en) High jump posture detection method for high jump athletes based on three-dimensional model
Murthy et al. DiveNet: Dive Action Localization and Physical Pose Parameter Extraction for High Performance Training
CN117671738B (en) Human body posture recognition system based on artificial intelligence
Yadav et al. An Efficient Deep Convolutional Neural Network Model For Yoga Pose Recognition Using Single Images

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant