CN116189301A - Standing long jump motion standardability assessment method based on attitude estimation - Google Patents

Standing long jump motion standardability assessment method based on attitude estimation

Info

Publication number
CN116189301A
CN116189301A (application CN202310162836.0A)
Authority
CN
China
Prior art keywords
key
actions
long jump
motion
normalization
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310162836.0A
Other languages
Chinese (zh)
Inventor
巩秀钢
耿新源
张嘉俊
王洪波
李君晓
韩翔飞
王鹏飞
刘珈瑞
李富豪
丁誉盈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University of Technology
Original Assignee
Shandong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University of Technology filed Critical Shandong University of Technology
Priority to CN202310162836.0A priority Critical patent/CN116189301A/en
Publication of CN116189301A publication Critical patent/CN116189301A/en
Pending legal-status Critical Current

Classifications

    • G06V40/23 — Recognition of whole body movements, e.g. for sport training
    • G06V10/761 — Proximity, similarity or dissimilarity measures
    • G06V10/764 — Image or video recognition or understanding using classification, e.g. of video objects
    • G06V10/774 — Generating sets of training patterns; bootstrap methods, e.g. bagging or boosting
    • G06V20/42 — Higher-level, semantic clustering, classification or understanding of sport video content
    • G06V20/46 — Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • Y02P90/30 — Computing systems specially adapted for manufacturing

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • Medical Informatics (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Image Analysis (AREA)

Abstract

A standing long jump motion standardization assessment method based on pose estimation, belonging to the technical field of human motion posture correction. The method comprises the following steps: step 1, capturing the actions in a video stream with a deep-learning-based human pose estimation algorithm and estimating the human pose; step 2, determining the key actions in the standing long jump, constructing a key-point sequence for the key actions, comparing it frame by frame with the test video, and finally selecting the tested key actions; step 3, extracting the key actions and performing a standardization analysis against the standard actions to obtain their similarity. The method obtains the key points of the long-jump motion with a deep-learning-based pose estimation algorithm, extracts the key actions of the jump by comparing the similarity between each frame and a target action, and analyses the standardization of the jump, so that the jump can be evaluated conveniently and improvement suggestions given.

Description

Standing long jump motion standardability assessment method based on attitude estimation
Technical Field
A standing long jump motion standardability assessment method based on gesture estimation belongs to the technical field of human motion gesture correction.
Background
In sports, whether a motion is standard directly determines the training effect. Non-standard motion causes problems such as conditioned reflexes for incorrect movements, increased risk of sports injury, impaired muscle coordination, reduced proprioception and lower training efficiency. The standing long jump is an athletic event with strict requirements on motion standardization, and whether the motion is standard directly affects the athlete's training effect and final result. In traditional standing long jump training, a teacher generally evaluates a student's action subjectively, and the student cannot intuitively see whether the action is standard, so an objective method is needed to evaluate the standardization of the jump.
To analyze the standardization of the long-jump motion, the motion posture must first be captured. Early human action recognition required the assistance of external equipment to sense changes in body posture. In Dowling A V, Favre J, Andriacchi T P. "Inertial Sensor-Based Feedback Can Reduce Key Risk Metrics for Anterior Cruciate Ligament Injury During Jump Landings" [J]. American Journal of Sports Medicine, 2012, 40(5): 1075-1083, knee flexion angle, trunk inclination and thigh coronal-plane velocity are measured with inertial sensor devices and used to detect and identify anterior cruciate ligament injury risk. In the scheme disclosed in Pansiot J, Lo B, Yang G Z. "Swimming Stroke Kinematic Analysis with BSN" [C] // International Conference on Body Sensor Networks. IEEE, 2010, swimming motion analysis is performed with accelerometer-based micro-sensors: pitch- and roll-angle features extracted from the acceleration identify body posture and basic movement indexes, and a system for monitoring swimming performance is developed that can be applied to guiding training. Thus, although current motion-capture schemes can accurately infer human motion with sensors, a sensor must be worn for every session, so these methods lack convenience.
The standardization of an action is judged by computing its similarity to a standard action, and many computation methods have been proposed for inter-action similarity. In Jiang Ying, "Kinect-based sports training aid study" [J]. Automation Technology and Applications, 2019, 38(9), similarity is computed with the Euclidean distance: the differences between joint points are compared from the obtained joint coordinates, and the action is evaluated. However, this method requires the two videos to correspond at the same time points, and when students differ in height or build, shifts in the coordinate positions cause large deviations in the computed result, so the Euclidean distance has gradually been replaced by other methods. In Yu Jinghua, Wang Qing, Chen Hong, "A motion-estimation-based somatosensory dance interaction system" [J]. Computer and Modernization, 2018(6): 9, key frames of the reference motion are extracted with interpolation wavelets, the reference motion and the comparison motion are matched with the DTW algorithm, and the average distance between the matched key frames is normalized to obtain the similarity of the two motion sequences. Computing similarity with DTW effectively evaluates the similarity between time series, but gives only a rough overall similarity and cannot provide detailed suggestions about the action.
In view of the above problems, designing a standardization analysis method for the standing long jump that evaluates the jump action and gives improvement suggestions has become a problem to be solved urgently in the field.
Disclosure of Invention
The technical problem to be solved by the invention is: to obtain the key points of the long-jump action with a deep-learning-based pose estimation algorithm, to extract the key actions of the jump by comparing the similarity between each frame and a target action, and to analyse the standardization of the jump, so as to evaluate the action and give improvement suggestions.
The technical scheme adopted to solve this technical problem is as follows: the standing long jump motion standardization assessment method based on pose estimation is characterized by comprising the following steps:
step 1, capturing actions in a video stream by adopting a human body posture estimation algorithm based on deep learning, and estimating human body postures;
step 2, determining key actions in the standing long jump, constructing a key point sequence of the key actions, comparing the key point sequence with a test video frame by frame, and finally selecting the tested key actions;
and step 3, extracting key actions, and carrying out normalization analysis on the extracted key actions and standard actions to obtain the similarity between the actions of the tester and the standard actions.
Preferably, step 1 comprises the steps of:
step 1-1, determining a human body posture estimation data set;
step 1-2, performing downsampling processing on an input picture;
step 1-3, determining a human body posture estimation algorithm, inputting all key points in an image, and identifying the key points of the human body in a thermodynamic diagram mode;
step 1-4, different loss functions are used for the thermodynamic diagrams formed in step 1-3, respectively.
Preferably, in step 1-1, subsets of COCO (2017) and AI Challenger (2018) are used and cropped so that there is only one person per image.
Preferably, in step 1-3, a bottom-up method is used: firstly, identifying all key points in an input image, classifying the key points belonging to the same person, and when the identification of the key points of the human body is realized in a thermodynamic diagram mode, using MobileNet-v3 as a backbone network, and generating four thermodynamic diagrams by combining feature pyramid extraction features: the center point of the human body, the set of all human body key points, the offset of the key points and the quantization error of the key points.
Preferably, the key points include the human body: left and right eyes, left and right ears, nose, left and right shoulder joints, left and right elbow joints, left and right hands, left and right hip joints, left and right knee joints, and left and right feet.
Preferably, step 3 comprises the steps of:
step 3-1, calculating the normalization of the local joint;
step 3-2, calculating the normalization of the overall action;
step 3-3, calculating normalization of foot track;
step 3-4, action normalization scoring.
Preferably, in step 3-1, when determining motion similarity, a dynamic programming method is used to compute the similarity of the motion, and the angle of each joint is compared through a cosine-similarity calculation.
Preferably, in step 3-2, according to the features of the standing long jump event and combined with a priority matching rule, features with higher priority are given a larger feature index; the feature similarity is then obtained from the feature indexes as the evaluation result, giving the overall similarity between the skeleton key-point vectors of the pose under test and the standard pose:

C_i = (1 − λ)^w · d + λ(|cos(α_i)| + 1)^q

where C_i is the feature similarity of each joint, cos(α_i) is the cosine similarity of each joint, d is the DTW value of the two actions, λ ∈ (0, 1) with an optimal value of 0.73, and w and q are feature indexes.
Preferably, in step 3-4, the motion normalization calculation uses a step-type score, and the feature cosine similarity of each joint is weighted with the integral difference value of the foot track:
[Formula images in the original: the step score S_i of each joint and the final key-action score.]

where S_i is the step score of each joint and C_i the feature similarity of each joint; w and q are feature indexes; the maximum score of each joint is 20; the joint scores are accumulated and the integral difference of the whole action is subtracted to obtain the key-action score; k is a score threshold, a denotes the take-off point and b the landing point.
Compared with the prior art, the invention has the following beneficial effects:
in the standing long jump motion standardability assessment method based on gesture estimation, a gesture estimation algorithm based on deep learning is used for obtaining key points of long jump motions, the key motions of long jump are extracted by comparing similarity between each frame of motion and a target motion, standardability of long jump is analyzed, so that long jump motions can be evaluated conveniently, and improvement suggestions are provided.
Drawings
Fig. 1 is a flow chart of a standing long jump motion normalization evaluation method based on gesture estimation.
Fig. 2 is a graph of the DTW values of the preparation (take-off) action.
Fig. 3 is a graph of the DTW values of the airborne action.
Fig. 4 is a graph of the DTW values of the landing action.
Fig. 5 is a graph of the DTW values of the buffering action.
Fig. 6 is a distribution diagram of the foot key points before eliminating non-jump key points.
Fig. 7 is a distribution diagram of the foot key points after eliminating non-jump key points.
Detailed Description
FIGS. 1-7 illustrate preferred embodiments of the present invention, and the present invention will be further described with reference to FIGS. 1-7.
As shown in fig. 1, a standing-off motion normalization evaluation method (hereinafter referred to as an evaluation method) based on gesture estimation includes the following steps:
step 1, estimating human body posture;
in the prior art, the method for estimating the human body posture comprises a top-down method such as CPM, alphaPose and a bottom-up method such as OpenPose, personLab, however, the conventional human body posture estimation scheme has the problems of incorrect positioning of key points and too slow model speed, so that in the estimation method, the motion in the video stream is captured by adopting a human body posture estimation algorithm based on deep learning in the estimation method aiming at the problems existing in the prior art. Visual-based intelligent human body posture estimation is the most challenging direction in the field of computer vision in recent years, and recognizes the behavior actions of people in video by detecting the actions of people in video sequences, extracting action features and learning action features. The method specifically comprises the following steps:
step 1-1, determining a human body posture estimation data set;
in the evaluation method, a subset of COCO (2017) and Al_Changer (2018) is adopted, clipping is carried out, only one person is used in each image, key points of human bones in 1362 standing long jump projects are marked and added into a data set, and meanwhile, mirror image processing is carried out on part of original images and then marking is carried out. By improving the quantity and quality of the marked pictures, the problem of positioning errors of key points is effectively reduced.
And step 1-2, performing downsampling processing on the input picture. In model training, the resolution of an input picture is generally reduced in order to reduce the training difficulty and the resource occupation rate, and the training is performed on the downsampled resolution. In order to enable the model to train in a heat map mode, the skeleton key point coordinates in the original image are converted into key point coordinates in the resolution after downsampling at the same time of downsampling, and the key point coordinates are converted into the heat map through Gaussian blur. After the heat map prediction is carried out, the resolution of the downsampled picture is restored to the original pixel, and the predicted key point coordinates are found in the original coordinate space.
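As an illustration of the heat-map conversion described above, the following minimal NumPy sketch maps an original-image keypoint into the downsampled resolution and renders it as a Gaussian heat map; the function name and the default stride and sigma are our own assumptions, not values taken from the patent:

```python
import numpy as np

def keypoint_to_heatmap(x, y, out_w, out_h, stride=4, sigma=2.0):
    """Render one keypoint as a 2-D Gaussian heat map at the downsampled
    resolution: original coordinates are divided by the stride, then a
    Gaussian is centred on the (possibly fractional) location."""
    cx, cy = x / stride, y / stride
    xs = np.arange(out_w, dtype=float)            # column coordinates
    ys = np.arange(out_h, dtype=float)[:, None]   # row coordinates
    return np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
```

Recovering the prediction then works in reverse: the arg-max of the predicted heat map is multiplied back by the stride to land in the original coordinate space.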
And step 1-3, determining a human body posture estimation algorithm.
In the estimation method, a bottom-up method is adopted: all key points in the input image are identified first, and then the key points belonging to the same person are classified. The model adopts a thermodynamic diagram mode to realize the identification of key points of a human body, uses MobileNet-v3 as a main network, combines a Feature Pyramid (FPN) to extract features, and generates four thermodynamic diagrams: a Center point (Center) of a human body, a set of all human body key points (Keypoints), an Offset (Reg) of the key points, and a quantization error (Offset) of the key points.
And processing the obtained thermodynamic diagram, taking out 2K values corresponding to the coordinate positions of the 2K channels of the header_Reg, and adding the center point coordinates to obtain a rough key point position. Dividing the header_Keypoints by a weight matrix, and obtaining the maximum coordinates through K channels to obtain the refined 17 key point coordinates. The 17 key points are sequentially from top to bottom: the left and right eyes, left and right ears, nose, left and right shoulder joints, left and right elbow joints, left and right hands, left and right hip joints, left and right knee joints, and left and right feet are numbered 1 to 17, respectively.
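The decoding above can be sketched as follows. The channel layout (x then y per keypoint in the Reg and Offset heads) and the function name are assumptions for illustration, since the patent names the four tensors but does not give their exact layout:

```python
import numpy as np

def decode_keypoints(center, reg, kpts, offset):
    """center: (H, W) body-centre heat map; reg: (2K, H, W) keypoint offsets
    from the centre; kpts: (K, H, W) per-keypoint heat maps; offset:
    (2K, H, W) sub-pixel quantisation errors. Returns coarse and refined
    (x, y) coordinates for each of the K keypoints."""
    K = kpts.shape[0]
    cy, cx = np.unravel_index(center.argmax(), center.shape)
    coarse, refined = [], []
    for k in range(K):
        # coarse location: centre coordinates plus the regressed offset
        coarse.append((cx + reg[2 * k, cy, cx], cy + reg[2 * k + 1, cy, cx]))
        # refined location: keypoint heat-map maximum plus quantisation error
        ky, kx = np.unravel_index(kpts[k].argmax(), kpts[k].shape)
        refined.append((kx + offset[2 * k, ky, kx],
                        ky + offset[2 * k + 1, ky, kx]))
    return coarse, refined
```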
Steps 1-4, a different loss function is used for each thermodynamic diagram.
In the present estimation method, the Loss function uses weighted MSE and L1 Loss. In the thermodynamic diagram formed in the steps 1-3, the Center point (Center) of the human body and the set (Keypoints) of all key points of the human body adopt weighted MSE, positive and negative samples are balanced, the Offset (Reg) of the key points and the quantization error (Offset) of the key points adopt L1 Loss, and finally, the Loss weights are set to be 1:1:1:1.
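A minimal sketch of the two loss types named above; the positive-pixel weighting scheme shown here is an assumption for illustration, as the patent states that the MSE is weighted to balance positive and negative samples without giving the weights:

```python
import numpy as np

def weighted_mse(pred, target, pos_weight=10.0):
    """MSE with a heavier weight on positive (Gaussian-peak) pixels,
    balancing them against the far more numerous background pixels."""
    w = np.where(target > 0, pos_weight, 1.0)
    return float(np.mean(w * (pred - target) ** 2))

def l1_loss(pred, target):
    """Plain L1 loss, as used for the offset and quantisation-error heads."""
    return float(np.mean(np.abs(pred - target)))
```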
Step 2, extracting key actions;
A long jump may involve hundreds of frames of actions, and analysing every frame would not only increase the complexity of the system but also reduce its robustness. It is therefore necessary to extract the key actions of the jump before the standardization analysis. In the estimation method, the take-off action, the airborne action, the landing action and the buffering action are taken as the four key actions of one complete standing long jump. A key-point sequence is constructed for the four actions, compared frame by frame with the test video, and the tested key actions are finally selected.
The test video is compared with the key actions using Dynamic Time Warping (DTW), a nonlinear warping technique that combines distance measurement with time warping and is used to measure the similarity between two time series of unequal length:
the target point is defined as (x, y), and the DTW calculation formula of the target point is:
Figure BDA0004094781430000051
wherein K is [ max (x, y), x+y-1],W i Is the sequence distance value of each path, the optimal path W needs to satisfy the following conditions:
1. boundary: the start and end points of W must be the start and end points of the diagonal of the plane, i.e. W 1 =(1,1),W K =(x,y)。
2. Continuity: for two adjacent points W i (x i ,y i ) And W is i-1 (x i-1 ,y i-1 ) Wherein x is i -x i-1 ≤1,y i -y i-1 And 1. Ltoreq.1 so that adjacent dots are continuous.
3. Monotonicity: for two adjacent points W i (x i ,y i ) And W is i-1 (x i-1 ,y i-1 ) Wherein x is i -x i-1 ≥0,y i -y i-1 And (3) not backspacing the elements on W.
And calculating the DTW values of the action sequences and the standard action sequences in all frames, wherein the smaller the DTW value is, the more similar the two sequences are represented. As shown in fig. 2 to 5, the minimum value in the four curves is taken as a key action.
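The DTW computation described above can be sketched as the standard dynamic program; using the Euclidean distance as the per-frame cost is an assumption for illustration:

```python
import numpy as np

def dtw(a, b):
    """Cumulative DTW distance between two sequences of frame features,
    honouring the boundary, continuity and monotonicity constraints."""
    x, y = len(a), len(b)
    D = np.full((x + 1, y + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, x + 1):
        for j in range(1, y + 1):
            # per-frame cost: Euclidean distance between feature vectors
            cost = np.linalg.norm(np.asarray(a[i - 1], dtype=float)
                                  - np.asarray(b[j - 1], dtype=float))
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return float(D[x, y])
```

Identical sequences give a DTW value of 0; the smaller the value, the more similar the two sequences, as stated above.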
Step 3, action normalization analysis;
after the key actions are extracted, normalization analysis with the standard actions is required. In the estimation method, the standardability of the motion is judged by adopting the joint angle difference and the characteristic index, the segmentation of the test time sequence is realized by the joint angle difference, and then the distance difference between the test sequence and the standard sequence is calculated by adopting the DTW, so that the similarity of the motion of a tester and the standard motion is obtained. On the basis of joint angle calculation, a characteristic index is added, so that accuracy of similarity calculation is improved, and the method specifically comprises the following steps:
step 3-1, calculating the normalization of the local joint;
because of the difference of human body types, when the motion similarity is determined, a dynamic programming method is needed to calculate the similarity of the motion, and the angles of all joints are realized by adopting a cosine similarity calculation method. Cosine similarity is used to measure the difference in direction between two vectors: two n-dimensional vectors a= (a) 1 ,a 2 ,...,a n ),B=(b 1 ,b 2 ,...,b n ) The cosine of the included angle is within the range of [ -1,1]The cosine value is inversely related to the included angle of the vector, and when the two vectors are reversed, the cosine value is taken to be-1, and when the two vectors are in the same direction, the cosine value is taken to be 1.
Figure BDA0004094781430000061
Wherein: a is that i 、B i Representation ofThe dimension vector, i, takes values of 1-n, and the theta table looks like the angles of all joints.
Step 3-2, calculating the normalization of the overall action;
according to the features of the standing long jump project, combining with a priority matching rule, giving a feature index with higher priority to the features, and finally solving the feature similarity according to the feature index to serve as an evaluation result, so that the overall similarity between the skeleton key point vectors of the gesture to be measured and the standard gesture is obtained:
C i =(1-λ) w d+λ(|cos(a i )|+1) q
wherein C is i The smaller the value is, the more standard the motion is, which indicates the feature similarity of each joint. cos (alpha) i ) The cosine similarity of each joint is represented, d represents the DTW values of two actions, lambda E (0, 1), the optimal value is 0.73, and w and q are characteristic indexes.
Because each joint moves through a different range of angles, the resulting cosine similarities differ considerably; w and q therefore take different values for each joint so that the score weights are the same, as shown in the following table:
table 1 characteristic index for each joint
[Table image in the original: the feature index values w and q for each joint.]
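The per-joint feature similarity C_i can be sketched directly from the formula above; λ defaults to the document's stated optimum of 0.73, while the function name is our own:

```python
def feature_similarity(d, cos_alpha, w, q, lam=0.73):
    """C_i = (1 - lam)^w * d + lam * (|cos(alpha_i)| + 1)^q,
    combining the DTW distance d with the joint's cosine similarity;
    w and q are the per-joint feature indexes from Table 1."""
    return (1.0 - lam) ** w * d + lam * (abs(cos_alpha) + 1.0) ** q
```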
Step 3-3, calculating normalization of foot track;
and filtering the locus of the foot key points for the normalization of the whole long jump flow, and calculating the integral value between the foot key points and the lowest point horizontal line and the absolute value of the standard action difference to judge. Before calculation, the key points of the feet need to be filtered, and the key points in the non-jump process are removed, as shown in fig. 6-7.
Because the points are mutually discrete, the integral parabola is not a continuous smooth curve and the exact value cannot be computed directly from a closed-form formula; an approximate value is obtained from the definition of the Riemann integral:
∫ f(x) dx ≈ Σ_{i=0}^{n−1} f(t_i)(x_{i+1} − x_i)

where the left side is the integral of the continuous function and the right side the integral approximation of the discrete function; x_{i+1} − x_i is the difference between the abscissae of adjacent points, and f(t_i) is a function value within [x_i, x_{i+1}], defaulted to the maximum value in the interval to unify the standard. Multiplying these and summing the results from 0 to n − 1 gives the integral approximation, where a denotes the take-off point and b the landing point.
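The Riemann-sum approximation above, taking the larger of each interval's endpoint values as f(t_i) per the "maximum in the interval" convention, can be sketched as (function name is an assumption):

```python
def integral_approx(xs, ys):
    """Approximate the area under a discrete foot trajectory:
    sum of f(t_i) * (x_{i+1} - x_i) over the intervals, with f(t_i)
    taken as the maximum of the interval's endpoint values."""
    total = 0.0
    for i in range(len(xs) - 1):
        total += max(ys[i], ys[i + 1]) * (xs[i + 1] - xs[i])
    return total
```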
And 3-4, scoring the motion normalization.
The final motion standardization calculation uses step-type scoring, weighting the feature cosine similarity of each joint with the integral difference of the foot trajectory. The formulas are as follows:
[Formula images in the original: the step score S_i of each joint and the final key-action score.]

To unify the scores, S_i is the step score of each joint point, with a maximum score of 20 per joint; the joint scores are accumulated and the integral difference of the whole action is subtracted to obtain the key-action score. Because the range of motion differs between joints, different score thresholds k are set, expressed through the feature indexes w and q, so that the feature similarities C_i can be represented under the same scoring criterion; a denotes the take-off point and b the landing point.
The above description is only a preferred embodiment of the present invention, and is not intended to limit the invention in any way, and any person skilled in the art may make modifications or alterations to the disclosed technical content to the equivalent embodiments. However, any simple modification, equivalent variation and variation of the above embodiments according to the technical substance of the present invention still fall within the protection scope of the technical solution of the present invention.

Claims (9)

1. A standing long jump motion standardability assessment method based on gesture estimation is characterized in that: the method comprises the following steps:
step 1, capturing actions in a video stream by adopting a human body posture estimation algorithm based on deep learning, and estimating human body postures;
step 2, determining key actions in the standing long jump, constructing a key point sequence of the key actions, comparing the key point sequence with a test video frame by frame, and finally selecting the tested key actions;
and step 3, extracting key actions, and carrying out normalization analysis on the extracted key actions and standard actions to obtain the similarity between the actions of the tester and the standard actions.
2. The standing long jump motion normalization assessment method based on gesture estimation according to claim 1, wherein: step 1 comprises the following steps:
step 1-1, determining a human body posture estimation data set;
step 1-2, performing downsampling processing on an input picture;
step 1-3, determining a human body posture estimation algorithm, inputting all key points in an image, and identifying the key points of the human body in a thermodynamic diagram mode;
step 1-4, different loss functions are used for the thermodynamic diagrams formed in step 1-3, respectively.
3. The standing long jump motion normalization evaluation method based on the posture estimation according to claim 2, characterized in that: in step 1-1, subsets of COCO (2017) and AI Challenger (2018) are employed and cropped so that there is only one person per image.
4. The standing long jump motion normalization evaluation method based on the posture estimation according to claim 2, characterized in that: in step 1-3, a bottom-up method is adopted: firstly, identifying all key points in an input image, classifying the key points belonging to the same person, and when the identification of the key points of the human body is realized in a thermodynamic diagram mode, using MobileNet-v3 as a backbone network, and generating four thermodynamic diagrams by combining feature pyramid extraction features: the center point of the human body, the set of all human body key points, the offset of the key points and the quantization error of the key points.
5. The standing long jump motion normalization assessment method based on pose estimation according to claim 1 or 4, wherein: the human keypoints include the left and right eyes, left and right ears, nose, left and right shoulder joints, left and right elbow joints, left and right hands, left and right hip joints, left and right knee joints, and left and right feet.
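The 17 keypoints listed in claim 5 can be given a fixed index order, which the heatmap channels and downstream joint-angle computations would share. The ordering below is a hypothetical COCO-style choice; the claim lists the body parts but does not specify an order.

```python
# Hypothetical index order (COCO-style); the patent lists the parts but not an order.
KEYPOINTS = [
    "nose", "left_eye", "right_eye", "left_ear", "right_ear",
    "left_shoulder", "right_shoulder", "left_elbow", "right_elbow",
    "left_hand", "right_hand", "left_hip", "right_hip",
    "left_knee", "right_knee", "left_foot", "right_foot",
]
KEYPOINT_INDEX = {name: i for i, name in enumerate(KEYPOINTS)}
```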
6. The standing long jump motion normalization assessment method based on pose estimation according to claim 1, wherein step 3 comprises the following steps:
step 3-1, calculating the normalization of the local joints;
step 3-2, calculating the normalization of the overall action;
step 3-3, calculating the normalization of the foot trajectory;
step 3-4, scoring the action normalization.
7. The standing long jump motion normalization assessment method based on pose estimation according to claim 6, wherein: in step 3-1, when determining motion similarity, a dynamic programming method is used to compute the similarity of the motions, and the joint angles are compared using cosine similarity.
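The two ingredients of claim 7 can be sketched as follows: cosine similarity over angle (or keypoint) vectors, and a classic dynamic-programming DTW over two joint-angle sequences. This is a textbook formulation under the assumption of 1-D angle sequences with absolute-difference local cost; the patent does not spell out these details.

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine similarity between two vectors."""
    u, v = np.asarray(u, dtype=float), np.asarray(v, dtype=float)
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def dtw_distance(seq_a, seq_b):
    """Dynamic-programming DTW distance between two 1-D joint-angle sequences."""
    n, m = len(seq_a), len(seq_b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(seq_a[i - 1] - seq_b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])
```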
8. The standing long jump motion normalization assessment method based on pose estimation according to claim 6, wherein: in step 3-2, according to the characteristics of the standing long jump event and in combination with a priority matching rule, features with higher priority are given larger feature exponents; the feature similarity obtained from these exponents serves as the evaluation result, yielding the overall similarity between the skeleton keypoint vectors of the pose under test and the standard pose:
C_i = (1-λ)^w · d + λ · (cos(α_i) + 1)^q
where C_i denotes the feature similarity of each joint, cos(α_i) denotes the cosine similarity of each joint, d denotes the DTW distance between the two actions, λ ∈ (0, 1) with an optimal value of 0.73, and w and q are the feature exponents.
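The claim-8 formula, read here as C_i = (1-λ)^w · d + λ · (cos(α_i)+1)^q with w and q as exponents (a reconstruction from the garbled source notation), can be sketched directly. The default exponents of 1.0 are illustrative; λ = 0.73 is the optimal value stated in the claim.

```python
def feature_similarity(d, cos_alpha, lam=0.73, w=1.0, q=1.0):
    """C_i = (1-λ)^w · d + λ · (cos(α_i)+1)^q (reconstructed reading; w, q as exponents)."""
    return (1 - lam) ** w * d + lam * (cos_alpha + 1) ** q
```

With a DTW distance of 0 and a cosine similarity of 1 (identical motions), the score reduces to λ·2 = 1.46 for w = q = 1.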
9. The standing long jump motion normalization assessment method based on pose estimation according to claim 6, wherein: in step 3-4, stepped scoring is adopted in the motion normalization calculation, weighting the feature cosine similarity of each joint against the overall difference of the foot trajectory:
[The two stepped-scoring equations appear in the source only as image references (FDA0004094781420000021, FDA0004094781420000022) and are not reproduced here.]
where S_i denotes the stepped score of each joint and C_i the feature similarity of each joint; w and q are the feature exponents; the maximum score of each joint is 20; the joint scores are summed and the overall difference of the whole action is subtracted to obtain the key-action score; k denotes the score threshold, a the takeoff point, and b the landing point.
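Since the claim-9 equations survive only as image references, the following is a plausible sketch consistent with the textual description: full marks (20) for a joint whose feature similarity reaches the threshold k, a proportional score below it, and the overall foot-trajectory difference subtracted from the summed joint scores. The threshold default, the proportional fallback, and the function names are all assumptions, not the patent's actual formulas.

```python
def step_score(c_i, k=0.8, max_score=20.0):
    """Stepped score for one joint: full marks at or above threshold k, proportional below
    (illustrative reconstruction; the source equations are not available)."""
    if c_i >= k:
        return max_score
    return max_score * c_i / k

def key_action_score(joint_similarities, foot_track_diff, k=0.8):
    """Key-action score: summed per-joint stepped scores minus the overall
    foot-trajectory difference, floored at zero (step 3-4)."""
    total = sum(step_score(c, k) for c in joint_similarities)
    return max(0.0, total - foot_track_diff)
```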
CN202310162836.0A 2023-02-24 2023-02-24 Standing long jump motion standardability assessment method based on attitude estimation Pending CN116189301A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310162836.0A CN116189301A (en) 2023-02-24 2023-02-24 Standing long jump motion standardability assessment method based on attitude estimation


Publications (1)

Publication Number Publication Date
CN116189301A true CN116189301A (en) 2023-05-30

Family

ID=86438052

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310162836.0A Pending CN116189301A (en) 2023-02-24 2023-02-24 Standing long jump motion standardability assessment method based on attitude estimation

Country Status (1)

Country Link
CN (1) CN116189301A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117893953A * 2024-03-15 2024-04-16 四川深蓝鸟科技有限公司 Soft digestive tract endoscope operation standard action evaluation method and system
CN117893953B * 2024-03-15 2024-06-14 四川深蓝鸟科技有限公司 Soft digestive tract endoscope operation standard action evaluation method and system

Similar Documents

Publication Publication Date Title
CN108256433B (en) Motion attitude assessment method and system
US8213678B2 (en) System and method of analyzing the movement of a user
CN109522850B (en) Action similarity evaluation method based on small sample learning
CN110674785A (en) Multi-person posture analysis method based on human body key point tracking
CN109635820B (en) Construction method of Parkinson's disease bradykinesia video detection model based on deep neural network
CN110688929B (en) Human skeleton joint point positioning method and device
CN110738154A (en) pedestrian falling detection method based on human body posture estimation
WO2017161734A1 (en) Correction of human body movements via television and motion-sensing accessory and system
CN111860157B (en) Motion analysis method, device, equipment and storage medium
CN113255522B (en) Personalized motion attitude estimation and analysis method and system based on time consistency
CN114550027A (en) Vision-based motion video fine analysis method and device
CN116189301A (en) Standing long jump motion standardability assessment method based on attitude estimation
CN112464793A (en) Method, system and storage medium for detecting cheating behaviors in online examination
CN115482580A (en) Multi-person evaluation system based on machine vision skeletal tracking technology
CN114566249B (en) Human motion safety risk assessment and analysis system
CN114973401A (en) Standardized pull-up assessment method based on motion detection and multi-mode learning
CN110956141A (en) Human body continuous action rapid analysis method based on local recognition
CN113221815A (en) Gait identification method based on automatic detection technology of skeletal key points
CN114639168B (en) Method and system for recognizing running gesture
CN115761901A (en) Horse riding posture detection and evaluation method
WO2021250786A1 (en) Decision program, decision device, and decision method
Connolly et al. Automated identification of trampoline skills using computer vision extracted pose estimation
CN116630551B (en) Motion capturing and evaluating device and method thereof
CN113408434B (en) Intelligent monitoring expression recognition method, device, equipment and storage medium
CN113408433B (en) Intelligent monitoring gesture recognition method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination