CN114092971A - Human body action evaluation method based on visual image - Google Patents

Human body action evaluation method based on visual image

Info

Publication number
CN114092971A
Authority
CN
China
Prior art keywords
rotation angle
counterclockwise rotation
human body
action evaluation
angle
Prior art date
Legal status
Pending
Application number
CN202111423509.3A
Other languages
Chinese (zh)
Inventor
仲元红
钟代笛
徐乾锋
冉琳
王新月
郭雨薇
魏晓燕
赵艳霞
黄智勇
周庆
葛亮
唐枋
刘继武
Current Assignee
Chongqing University
Original Assignee
Chongqing University
Priority date
Filing date
Publication date
Application filed by Chongqing University filed Critical Chongqing University
Priority to CN202111423509.3A priority Critical patent/CN114092971A/en
Publication of CN114092971A publication Critical patent/CN114092971A/en
Pending legal-status Critical Current

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of computer vision image processing, and in particular to a human body action evaluation method based on visual images, which comprises the following steps: acquiring a video to be tested of a tester; performing skeleton analysis and posture analysis on the video frames of the video to be tested to generate corresponding human body key point coordinate graphs; calculating corresponding action evaluation auxiliary information based on the human body key point coordinate graph and the corresponding action evaluation criteria; and completing the action evaluation based on the action evaluation auxiliary information and the corresponding action evaluation decision basis to generate a corresponding action evaluation result. The human body action evaluation method is applicable to multiple action evaluations and can therefore improve the efficiency of human body action evaluation.

Description

Human body action evaluation method based on visual image
Technical Field
The invention relates to the technical field of computer vision image processing, and in particular to a human body action evaluation method based on visual images.
Background
With the wide application of Internet big-data information technology, human behavior detection and recognition technologies based on visual images are finding more and more application scenarios. By analyzing human action characteristics such as expression and posture, they can provide rich identification feature information for applications involving people in public places or specific activity spaces, and form an important component of big data on human activity.
For example, in the fields of sports competition and health screening, human actions need to be recognized in order to evaluate how standard they are. Action evaluation needs to be performed based on action evaluation criteria and an action evaluation decision basis. An early, commonly used assessment method was for an assessor to visually observe the tester's movements and manually compare them with a standard movement pattern to give a score; meanwhile, the assessor held a camera to record video and stored screenshots as a backup. This method not only wastes manpower and material resources, but the subjectivity of manual judgment also makes the evaluation result insufficiently objective and accurate.
With the development of computer technology, methods for evaluating human body actions based on visual images have appeared in the prior art. For example, Chinese patent publication No. CN110941990A discloses a method and apparatus for evaluating human body actions based on skeletal key points, which includes: acquiring motion pictures of a target subject during human body movement; extracting the skeletal key point coordinates of the target subject's action from the motion pictures; and inputting the skeletal key point coordinates into a pre-trained evaluation model to evaluate the action of the target subject, wherein the evaluation model evaluates the human body action based on the human body posture azimuth angle calculated from the skeletal key point coordinates.
The human body action evaluation method in the existing scheme calculates the corresponding human body posture azimuth angle based on the skeletal key points and thereby evaluates the human body action. However, the applicant has found that when the human body posture azimuth angle is used as the auxiliary information for action evaluation, it can generally be applied only to the evaluation of one corresponding action and is difficult to apply to the evaluation of multiple different actions, because evaluating some actions also requires calculating the distance or positional relationship between key points, and even the similarity between video frames. Which action evaluation auxiliary information must be calculated to complete an action evaluation is tied to the corresponding action evaluation criteria and decision basis, and the prior art offers no general evaluation method applicable to multiple action evaluations, so a dedicated evaluation method has to be designed for each action, resulting in low action evaluation efficiency. Therefore, how to design a general action evaluation method suitable for the evaluation of multiple actions is a technical problem in urgent need of a solution.
Disclosure of Invention
Aiming at the defects of the prior art, the technical problem to be solved by the invention is: how to provide a human body action evaluation method applicable to multiple action evaluations, thereby improving the efficiency of human body action evaluation.
In order to solve the technical problems, the invention adopts the following technical scheme:
A human body action evaluation method based on visual images comprises the following steps:
S1: acquiring a video to be tested of a tester;
S2: performing skeleton analysis and posture analysis on the video frames of the video to be tested to generate corresponding human body key point coordinate graphs;
S3: calculating the corresponding action evaluation auxiliary information based on the human body key point coordinate graph and the corresponding action evaluation criteria;
S4: completing the action evaluation based on the action evaluation auxiliary information and the corresponding action evaluation decision basis to generate the corresponding action evaluation result.
Preferably, in step S3, the action evaluation auxiliary information includes the counterclockwise rotation angle between key points;
the counterclockwise rotation angle is calculated through the following steps:
S301: acquiring the key point coordinates A, B, C for calculating the counterclockwise rotation angle;
S302: computing the corresponding key point vectors BA and BC based on the key point coordinates A, B, C;
S303: rotating the key point vector BA counterclockwise until it coincides with the key point vector BC; the angle through which the key point vector BA rotates to reach the key point vector BC is taken as the corresponding counterclockwise rotation angle.
Preferably, in step S3, the action evaluation auxiliary information includes the similarity between a counterclockwise rotation angle to be tested in the video to be tested and the corresponding template counterclockwise rotation angle in the template video; the similarity between the counterclockwise rotation angle to be tested and the corresponding template counterclockwise rotation angle is calculated based on a dynamic time warping algorithm.
Preferably, the similarity between the counterclockwise rotation angle to be tested and the corresponding template counterclockwise rotation angle is calculated through the following steps:
S311: acquiring the counterclockwise rotation angle sequence to be tested P = (p_1, p_2, …, p_n) and the corresponding template counterclockwise rotation angle sequence Q = (q_1, q_2, …, q_m); p_i represents the counterclockwise rotation angle to be tested corresponding to the i-th video frame of the video to be tested; q_i represents the template counterclockwise rotation angle corresponding to the i-th video frame of the template video;
S312: constructing an n × m two-dimensional matrix C based on the counterclockwise rotation angle sequence to be tested and the template counterclockwise rotation angle sequence; C(i, j) represents the Euclidean distance between the i-th counterclockwise rotation angle to be tested and the j-th template counterclockwise rotation angle;
S313: in the two-dimensional matrix C, calculating the cumulative distance from the starting position C(0,0) to the end position C(n, m), and recording the corresponding matching paths; then selecting the matching path corresponding to the minimum cumulative distance D as the optimal matching path, and counting the path step number K of the optimal matching path;
S314: calculating the corresponding similarity score based on the minimum cumulative distance D and the path step number K.
Preferably, the cumulative distance is calculated by the following formula:
d(i,j)=c(i,j)+min{d(i-1,j-1),d(i-1,j),d(i,j-1)};
the optimal matching path is selected by the following formula:
D = DTW(P, Q) = min{ sqrt(c_1 + c_2 + … + c_K) };
the similarity score S is calculated from the minimum cumulative distance D, the path step number K and the adjustment coefficient h [formula image not recoverable];
in the above formulas: d(i, j) represents the cumulative distance accumulated from the starting position C(0,0) to position C(i, j); c_k represents the k-th element of the two-dimensional matrix C along a matching path; S represents the similarity score; h represents an adjustment coefficient, set to 0.2.
Preferably, the types of counterclockwise rotation angle include the angle between the left forearm and the left upper arm, the angle between the left upper arm and the left shoulder, the angle between the left upper arm and the trunk, the angle between the trunk and the left thigh, the angle between the left thigh and the left calf, the angle between the right upper arm and the right forearm, the angle between the right shoulder and the right upper arm, the angle between the trunk and the right thigh, and the angle between the right thigh and the right calf; when calculating the similarity between a counterclockwise rotation angle to be tested and a template counterclockwise rotation angle, a single calculation covers only counterclockwise rotation angles of one type.
Preferably, in step S3, when the action evaluation auxiliary information is calculated, the recommended key points are selected to participate in the calculation through the following steps:
S321: calculating the variance of each counterclockwise rotation angle in the human body key point coordinate graph;
S322: calculating the motion information proportion corresponding to each counterclockwise rotation angle based on its variance;
S323: selecting the key points corresponding to the counterclockwise rotation angle with the maximum motion information proportion as the recommended key points.
Preferably, the variance of a counterclockwise rotation angle is calculated by the following formula:
σ² = (1/T) · Σ_{t=1..T} (r_t − u_r)²;
the motion information proportion of a counterclockwise rotation angle is calculated by the following formula:
I_n = e^(σ_n²) / ( e^(σ_1²) + e^(σ_2²) + … + e^(σ_N²) );
in the above formulas: σ² represents the variance of the counterclockwise rotation angle; r_t represents the counterclockwise rotation angle in the t-th video frame and T the number of video frames; u_r represents the mean value of the counterclockwise rotation angle over the human body key point coordinate graphs; N represents the number of counterclockwise rotation angles in the human body key point coordinate graph; I_n represents the motion information proportion of the n-th counterclockwise rotation angle; σ_n² represents the variance of the n-th counterclockwise rotation angle; e denotes the natural constant.
Preferably, in step S3, the action evaluation auxiliary information includes the Euclidean distance between key points, calculated by the following formula:
d(A, B) = sqrt( (x_1 − y_1)² + (x_2 − y_2)² + … + (x_n − y_n)² );
in the above formula: d(A, B) denotes the Euclidean distance between key points A(x_1, x_2, …, x_n) and B(y_1, y_2, …, y_n).
Preferably, in step S3, the action evaluation auxiliary information includes the positional relationship between key points; the positional relationship between key points includes slope and difference values.
Compared with the prior art, the human body action evaluation method of the invention has the following beneficial effects:
1. The invention can complete the evaluation of multiple actions according to the action evaluation criteria and decision basis of each action; that is, it provides a human body action evaluation method suitable for evaluating multiple actions, so a dedicated evaluation method need not be designed for each action, and the efficiency of human body action evaluation can be improved.
2. The invention generates the human body key point coordinate graph through skeleton analysis and posture analysis, calculates the action evaluation auxiliary information in combination with the corresponding action evaluation criteria, and completes the action evaluation in combination with the decision basis, so that the calculation of the auxiliary information and the action evaluation itself are tied to the corresponding criteria and decision basis, ensuring the accuracy of the auxiliary information and the accuracy and effect of the action evaluation.
Drawings
For purposes of promoting a better understanding of the objects, aspects and advantages of the invention, reference will now be made in detail to the present invention as illustrated in the accompanying drawings, in which:
FIG. 1 is a logic block diagram of a human motion assessment method;
FIG. 2 is a schematic illustration of ten counterclockwise rotational angles on a human body;
FIG. 3 is a schematic view of the limb angle between the right upper arm and the right forearm;
FIG. 4 is a schematic diagram of the counterclockwise rotation angle between the right upper arm and the right forearm.
Detailed Description
The following is further detailed by the specific embodiments:
example (b):
first, the meaning of the action evaluation criterion and the action evaluation decision basis will be explained.
Action evaluation criteria: refers to items that need to be evaluated when evaluating actions.
For example, in the case of the deep squat, the action evaluation criteria include: 1) whether the test bar is directly above the top of the head; 2) whether the trunk is parallel to the calves or perpendicular to the ground; 3) whether the thighs drop below the horizontal when squatting; 4) whether the knees stay aligned with the feet.
Action evaluation decision basis: refers to a scoring criterion at the time of action assessment.
Taking the deep squat as an example, the action evaluation decision basis includes: 1) the test bar is directly above the top of the head, the trunk is parallel to the calves or perpendicular to the ground, the thighs drop below the horizontal when squatting, and the knees stay aligned with the feet: score 3; 2) the required action cannot be completed as such but can be completed with a wooden board under the heels: score 2; 3) the required action still cannot be completed even with the board under the heels: score 1; 4) pain occurs in any part of the body during the test: score 0.
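To make the decision basis concrete, the deep-squat rules above can be read as a small rule table. The following Python sketch is purely illustrative: the criterion flags are hypothetical inputs assumed to come from the action evaluation auxiliary information, and the function name is not part of the patent.

```python
def score_deep_squat(bar_overhead: bool,
                     torso_aligned: bool,
                     thigh_below_horizontal: bool,
                     knees_track_feet: bool,
                     completed_with_heel_board: bool,
                     pain_reported: bool) -> int:
    """Map the deep-squat decision basis onto the 0-3 scoring scale.
    All inputs are hypothetical flags derived from the evaluation criteria."""
    if pain_reported:          # rule 4: pain anywhere during the test
        return 0
    if all((bar_overhead, torso_aligned,
            thigh_below_horizontal, knees_track_feet)):
        return 3               # rule 1: all four criteria met unaided
    if completed_with_heel_board:
        return 2               # rule 2: completed with a board under the heels
    return 1                   # rule 3: not completed even with the board
```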
Based on the above description, the present embodiment discloses a human body motion estimation method based on visual images.
As shown in FIG. 1, the human body action evaluation method based on visual images includes the following steps:
S1: acquiring a video to be tested of a tester;
S2: performing skeleton analysis and posture analysis on the video frames of the video to be tested to generate corresponding human body key point coordinate graphs; in this embodiment, skeleton analysis and posture analysis are performed on the video frames of the video to be tested through the AlphaPose model from Shanghai Jiao Tong University;
S3: calculating the corresponding action evaluation auxiliary information based on the human body key point coordinate graph and the corresponding action evaluation criteria;
S4: completing the action evaluation based on the action evaluation auxiliary information and the corresponding action evaluation decision basis to generate the corresponding action evaluation result.
In the invention, the evaluation of multiple actions can be completed according to the action evaluation criteria and decision basis of each action; that is, the invention provides a human body action evaluation method suitable for evaluating multiple actions, so a dedicated evaluation method need not be designed for each action, and the efficiency of human body action evaluation can be improved. Meanwhile, the human body key point coordinate graph is generated through skeleton analysis and posture analysis, the action evaluation auxiliary information is then calculated in combination with the corresponding action evaluation criteria, and the action evaluation is completed in combination with the decision basis, so that the calculation of the auxiliary information and the action evaluation are tied to the corresponding criteria and decision basis, ensuring the accuracy of the auxiliary information and the accuracy and effect of the action evaluation.
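As a minimal illustration of how the S1-S4 flow composes, the following Python sketch wires the stages together. The three callables are hypothetical stand-ins (the patent does not define this API): the pose model for S2, the auxiliary-information computation for S3, and the decision-basis scoring for S4.

```python
from typing import Any, Callable, Iterable, List

def evaluate_action(frames: Iterable[Any],
                    extract_keypoints: Callable[[Any], Any],
                    compute_assistance_info: Callable[[List[Any]], Any],
                    apply_decision_basis: Callable[[Any], Any]) -> Any:
    """Sketch of the S1-S4 pipeline; the callables are assumed stand-ins.

    frames: decoded video frames of the video to be tested (S1).
    """
    # S2: skeleton/posture analysis per frame -> key point coordinate graphs
    keypoint_maps = [extract_keypoints(frame) for frame in frames]
    # S3: auxiliary info (angles, similarities, distances) per the criteria
    info = compute_assistance_info(keypoint_maps)
    # S4: score the action against the decision basis
    return apply_decision_basis(info)
```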
In the specific implementation process, the action evaluation auxiliary information includes the counterclockwise rotation angle between key points; referring to FIG. 2, the types of counterclockwise rotation angle include the angle between the left forearm and the left upper arm, the angle between the left upper arm and the left shoulder, the angle between the left upper arm and the trunk, the angle between the trunk and the left thigh, the angle between the left thigh and the left calf, the angle between the right upper arm and the right forearm, the angle between the right shoulder and the right upper arm, the angle between the trunk and the right thigh, and the angle between the right thigh and the right calf.
The counterclockwise rotation angle is calculated through the following steps:
S301: acquiring the key point coordinates A, B, C for calculating the counterclockwise rotation angle;
S302: computing the corresponding key point vectors BA and BC based on the key point coordinates A, B, C;
S303: rotating the key point vector BA counterclockwise until it coincides with the key point vector BC; the angle through which the key point vector BA rotates to reach the key point vector BC is taken as the corresponding counterclockwise rotation angle.
Since the two-dimensional posture is obtained by performing skeleton analysis and posture analysis on a video frame, the key points in the human body key point coordinate graph are actually projections of the real posture onto a two-dimensional plane, so a simple limb angle can hardly represent the characteristics of a moving limb accurately. As shown in FIG. 3, the limb angle between the right upper arm and the right forearm is the same whether the right arm is bent in front of the chest or at the side of the body. From the data alone, the movement characteristics of the right arm appear identical because the limb angle is the same, while in fact the two movements differ greatly.
Therefore, direction information, i.e. the rotation direction, is added on top of the limb angle, so that the resulting counterclockwise rotation angle carries both angle information and direction information (as shown in FIG. 4). This compensates for the posture information lost when the real posture is projected onto the two-dimensional plane, allows the characteristics of a moving limb to be represented accurately, and thus guarantees the accuracy of human body action evaluation. Meanwhile, the ten counterclockwise rotation angles designed by the invention essentially cover the important limb movement characteristics of the human posture, further ensuring the effect of human body action evaluation.
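A compact way to compute this quantity, under the assumption of 2D key point coordinates and the vector convention reconstructed above (vectors BA and BC at vertex B), is an atan2 difference normalized to [0, 360); this is an illustrative sketch, not code from the patent.

```python
import math

def ccw_rotation_angle(A, B, C):
    """Angle swept when vector BA is rotated counterclockwise onto vector BC.

    A, B, C are (x, y) key point coordinates. Note that in image coordinates
    (y axis pointing down) the visual sense of rotation is mirrored, so flip
    the y components first if a screen-space convention is required.
    """
    ax, ay = A[0] - B[0], A[1] - B[1]          # vector BA
    cx, cy = C[0] - B[0], C[1] - B[1]          # vector BC
    angle = math.degrees(math.atan2(cy, cx) - math.atan2(ay, ax))
    return angle % 360.0                        # normalize to [0, 360)
```

For example, with A = (1, 0), B = (0, 0), C = (0, 1) the function returns 90.0, while swapping A and C returns 270.0, so the chest-front and body-side arm positions of FIG. 3 are no longer conflated.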
In a specific implementation process, the action evaluation auxiliary information includes the similarity between a counterclockwise rotation angle to be tested in the video to be tested and the corresponding template counterclockwise rotation angle in the template video; the similarity is calculated based on a dynamic time warping algorithm. In this embodiment, when calculating the similarity between a counterclockwise rotation angle to be tested and a template counterclockwise rotation angle, a single calculation covers only counterclockwise rotation angles of one type.
The similarity between the counterclockwise rotation angle to be tested and the corresponding template counterclockwise rotation angle is calculated through the following steps:
S311: acquiring the counterclockwise rotation angle sequence to be tested P = (p_1, p_2, …, p_n) and the corresponding template counterclockwise rotation angle sequence Q = (q_1, q_2, …, q_m); p_i represents the counterclockwise rotation angle to be tested corresponding to the i-th video frame of the video to be tested; q_i represents the template counterclockwise rotation angle corresponding to the i-th video frame of the template video;
S312: constructing an n × m two-dimensional matrix C based on the counterclockwise rotation angle sequence to be tested and the template counterclockwise rotation angle sequence; C(i, j) represents the Euclidean distance between the i-th counterclockwise rotation angle to be tested and the j-th template counterclockwise rotation angle;
S313: in the two-dimensional matrix C, calculating the cumulative distance from the starting position C(0,0) to the end position C(n, m), and recording the corresponding matching paths; then selecting the matching path corresponding to the minimum cumulative distance D as the optimal matching path, and counting the path step number K of the optimal matching path;
S314: calculating the corresponding similarity score based on the minimum cumulative distance D and the path step number K.
In the specific implementation process, the cumulative distance is calculated by the following formula:
d(i,j)=c(i,j)+min{d(i-1,j-1),d(i-1,j),d(i,j-1)};
the optimal matching path is selected by the following formula:
D = DTW(P, Q) = min{ sqrt(c_1 + c_2 + … + c_K) };
the similarity score S is calculated from the minimum cumulative distance D, the path step number K and the adjustment coefficient h [formula image not recoverable];
in the above formulas: d(i, j) represents the cumulative distance accumulated from the starting position C(0,0) to position C(i, j); c_k represents the k-th element of the two-dimensional matrix C along a matching path; S represents the similarity score; h represents an adjustment coefficient, set to 0.2. DTW in the formula refers to the best-matching-path algorithm.
In actual action evaluation, the video to be tested needs to be compared with the template video, the similarity is calculated, and the action evaluation is then completed through the similarity. Video images generally take the form of time series, so the similarity of two time series needs to be calculated. However, different people perform the same action at different speeds, and even the same person repeating the same action shows differences, so the lengths of the two time series are generally inconsistent. In this case, a conventional similarity calculation based directly on the Euclidean distance cannot effectively measure the similarity between the time series.
Therefore, the invention introduces a dynamic time warping algorithm to calculate the similarity between the video to be tested and the template video through the above steps: by adjusting the temporal correspondence (i.e. the length relationship) between the two sequences, the cumulative minimum distance between them is calculated to find the optimal matching path, so that the similarity of two time series of different lengths can be calculated and the accuracy of human body action evaluation can be guaranteed. Meanwhile, since the counterclockwise rotation angle accurately represents the characteristics of a moving limb, the similarity between counterclockwise rotation angles effectively represents the similarity between the video to be tested and the template video, better assisting the evaluation of the human body action.
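The following sketch implements this similarity computation under stated assumptions: the local cost c(i, j) is the absolute angle difference (the Euclidean distance between two scalars), and, since the patent's score formula survives only as an image, the score form S = exp(−h·D/K) is an assumed reading that uses exactly the quantities D, K and h = 0.2 named in the text.

```python
import numpy as np

def dtw_similarity(p, q, h=0.2):
    """DTW between a test angle sequence p and a template angle sequence q.

    Returns (D, K, S): minimum cumulative distance, step count of the optimal
    matching path, and an assumed similarity score exp(-h * D / K).
    """
    n, m = len(p), len(q)
    c = np.abs(np.subtract.outer(np.asarray(p, float),
                                 np.asarray(q, float)))   # c(i, j)
    d = np.full((n, m), np.inf)                 # cumulative distance d(i, j)
    steps = np.zeros((n, m), dtype=int)         # path length reaching (i, j)
    d[0, 0], steps[0, 0] = c[0, 0], 1
    for i in range(n):
        for j in range(m):
            if i == 0 and j == 0:
                continue
            # d(i,j) = c(i,j) + min{d(i-1,j-1), d(i-1,j), d(i,j-1)}
            cands = [
                (d[i - 1, j - 1], steps[i - 1, j - 1]) if i and j else (np.inf, 0),
                (d[i - 1, j], steps[i - 1, j]) if i else (np.inf, 0),
                (d[i, j - 1], steps[i, j - 1]) if j else (np.inf, 0),
            ]
            best, k = min(cands)
            d[i, j] = c[i, j] + best
            steps[i, j] = k + 1
    D, K = float(d[-1, -1]), int(steps[-1, -1])
    return D, K, float(np.exp(-h * D / K))
```

A quick check: dtw_similarity([10, 20, 30], [10, 20, 20, 30]) returns D = 0, a path of K = 4 steps and S = 1.0, matching the intuition that the same movement performed at a different speed should still score as identical.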
In the specific implementation process, when the action evaluation auxiliary information is calculated, the recommended key points are selected to participate in the calculation through the following steps:
S321: calculating the variance of each counterclockwise rotation angle in the human body key point coordinate graph;
S322: calculating the motion information proportion corresponding to each counterclockwise rotation angle based on its variance;
S323: selecting the key points corresponding to the counterclockwise rotation angle with the maximum motion information proportion as the recommended key points.
The variance of a counterclockwise rotation angle is calculated by the following formula:
σ² = (1/T) · Σ_{t=1..T} (r_t − u_r)²;
the motion information proportion of a counterclockwise rotation angle is calculated by the following formula:
I_n = e^(σ_n²) / ( e^(σ_1²) + e^(σ_2²) + … + e^(σ_N²) );
in the above formulas: σ² represents the variance of the counterclockwise rotation angle; r_t represents the counterclockwise rotation angle in the t-th video frame and T the number of video frames; u_r represents the mean value of the counterclockwise rotation angle over the human body key point coordinate graphs; N represents the number of counterclockwise rotation angles in the human body key point coordinate graph; I_n represents the motion information proportion of the n-th counterclockwise rotation angle; σ_n² represents the variance of the n-th counterclockwise rotation angle; e denotes the natural constant.
In actual action evaluation, for most actions only a small subset of limbs performs the main movement, while the other limbs move little or not at all. The limbs performing the main movement show a large range of angle change, the limbs that are not moving show a small range, and action evaluation generally considers the limbs performing the main movement.
Therefore, the recommended key points with large movement amplitude are selected to participate in the calculation by computing the variance of the counterclockwise rotation angles, the motion information proportion, and the Euclidean distance difference between key points. On one hand, the recommended key points accurately reflect the limbs performing the main movement, which guarantees the accuracy of human body action evaluation; on the other hand, the key points corresponding to limbs that move little or not at all do not participate in the calculation, which reduces the amount of computation required for human body action evaluation.
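Under the softmax-of-variance reading of the formulas above (an assumption; the originals survive only as images), the recommendation step can be sketched as follows. The input layout, a mapping from angle type to its per-frame value sequence, is likewise a hypothetical choice.

```python
import numpy as np

def recommend_angle(angle_sequences):
    """Pick the counterclockwise rotation angle with the largest motion
    information proportion (S321-S323); its key points become the
    recommended key points.

    angle_sequences: dict mapping angle name -> per-frame angle values.
    """
    names = list(angle_sequences)
    # S321: variance of each angle over the video frames
    var = np.array([np.var(np.asarray(angle_sequences[k], float))
                    for k in names])
    # S322: motion information proportion via a softmax over variances
    # (max subtracted for numerical stability; the argmax is unchanged)
    info = np.exp(var - var.max())
    info /= info.sum()
    # S323: angle with the maximum proportion
    return names[int(info.argmax())], dict(zip(names, info))
```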
In the specific implementation process, the action evaluation auxiliary information includes the Euclidean distance between key points, calculated by the following formula:
d(A, B) = sqrt( (x_1 − y_1)² + (x_2 − y_2)² + … + (x_n − y_n)² );
in the above formula: d(A, B) denotes the Euclidean distance between key points A(x_1, x_2, …, x_n) and B(y_1, y_2, …, y_n).
In the specific implementation process, the action evaluation auxiliary information includes the positional relationship between key points; the positional relationship between key points includes slope and difference values.
The invention can also calculate other action evaluation auxiliary information, such as the Euclidean distance between key points and the positional relationship between key points, to assist in completing the action evaluation, so that it can be better applied to the evaluation of multiple actions, ensuring both the efficiency and the accuracy of human body action evaluation.
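A small helper covering these remaining auxiliary quantities might look as follows; the function and its return layout are illustrative only, assuming 2D key points.

```python
import math

def keypoint_relations(A, B):
    """Euclidean distance, slope and difference values between two key
    points A = (x1, y1) and B = (x2, y2). Illustrative sketch only.
    """
    dx, dy = B[0] - A[0], B[1] - A[1]           # difference values
    distance = math.hypot(dx, dy)               # Euclidean distance
    slope = dy / dx if dx != 0 else math.inf    # slope (vertical -> inf)
    return distance, slope, (dx, dy)
```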
Specifically, when generating a human body key point coordinate graph, the corresponding video frame is first input into a pre-trained pose estimation model, which outputs a corresponding heat map; the key point coordinates are then calculated from the heat map to obtain the corresponding human body key point coordinate graph.
When training the pose estimation model, a pose data set for training is acquired; the labels pre-annotated on the training images of the pose data set are then converted into corresponding heat map labels to obtain the corresponding label heat maps; finally, the pose estimation model is trained on the label heat maps.
When generating a label heat map, the size W_h × H_h of the label heat map is first set to generate a heat map of size W_h × H_h; the heat distribution of the pre-annotated label on the label heat map is then calculated by the following formula to generate the corresponding label heat map:
G(x, y) = e^( −[(x − x_0)² + (y − y_0)²] / (2σ²) );
When calculating the key point coordinates, the heat map of size W_h × H_h is obtained and flattened into a 1 × (W_h · H_h) one-dimensional heat map; the index corresponding to the maximum heat value of the key point in the heat map is then calculated by the following formula; finally, the key point coordinates are calculated by combining that index with the heat map size: specifically, the index is divided by W_h, the quotient giving the row number y and the remainder the column number x of the key point in the W_h × H_h heat map, i.e. the key point coordinates (x, y);
index = Σ_i ( i · e^(β·x_i) / Σ_j e^(β·x_j) );
in the above formulas: G represents the heat value; x_0, y_0 represent the real coordinates of the pre-annotated label; x, y represent the coordinates of the label in the label heat map; σ represents the standard deviation, with a value of 2 or 3; e represents the natural constant; i, j represent indices of the one-dimensional heat map; x_i, x_j represent the heat values at indices i and j; β represents a calibration coefficient.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that, while the invention has been described with reference to preferred embodiments thereof, those skilled in the art will understand that various changes in form and detail may be made without departing from the spirit and scope of the invention as defined by the appended claims. Meanwhile, detailed structures and characteristics that are common general knowledge are not described at length in the embodiments. Finally, the scope of protection should be determined by the content of the claims, and the embodiments and other descriptions in the specification may be used to interpret the content of the claims.

Claims (10)

1. A human body action evaluation method based on visual images is characterized by comprising the following steps:
S1: acquiring a video to be tested of a tester;
S2: performing skeleton analysis and posture analysis on the video frames of the video to be tested to generate corresponding human body key point coordinate graphs;
S3: calculating the corresponding action evaluation auxiliary information based on the human body key point coordinate graph and the corresponding action evaluation criteria;
S4: completing the action evaluation based on the action evaluation auxiliary information and the corresponding action evaluation decision basis to generate the corresponding action evaluation result.
2. The human body action evaluation method based on visual images according to claim 1, wherein in step S3, the action evaluation auxiliary information includes the counterclockwise rotation angle between key points;
the counterclockwise rotation angle is calculated through the following steps:
S301: acquiring the key point coordinates A, B, C for calculating the counterclockwise rotation angle;
S302: computing the corresponding key point vectors BA and BC based on the key point coordinates A, B, C;
S303: rotating the key point vector BA counterclockwise until it coincides with the key point vector BC; the angle through which the key point vector BA rotates to reach the key point vector BC is taken as the corresponding counterclockwise rotation angle.
3. The human body action evaluation method based on visual images according to claim 2, wherein in step S3, the action evaluation auxiliary information includes the similarity between a counterclockwise rotation angle to be tested in the video to be tested and the corresponding template counterclockwise rotation angle in the template video; and the similarity between the counterclockwise rotation angle to be tested and the corresponding template counterclockwise rotation angle is calculated based on a dynamic time warping algorithm.
4. The human body action evaluation method based on visual images according to claim 3, wherein the similarity between the counterclockwise rotation angle to be tested and the corresponding template counterclockwise rotation angle is calculated through the following steps:
S311: acquiring the counterclockwise rotation angle sequence to be tested P = (p_1, p_2, …, p_n) and the corresponding template counterclockwise rotation angle sequence Q = (q_1, q_2, …, q_m); p_i represents the counterclockwise rotation angle to be tested corresponding to the i-th video frame of the video to be tested; q_i represents the template counterclockwise rotation angle corresponding to the i-th video frame of the template video;
S312: constructing an n × m two-dimensional matrix C based on the counterclockwise rotation angle sequence to be tested and the template counterclockwise rotation angle sequence; C(i, j) represents the Euclidean distance between the i-th counterclockwise rotation angle to be tested and the j-th template counterclockwise rotation angle;
S313: in the two-dimensional matrix C, calculating the cumulative distance from the starting position C(0,0) to the end position C(n, m), and recording the corresponding matching paths; then selecting the matching path corresponding to the minimum cumulative distance D as the optimal matching path, and counting the path step number K of the optimal matching path;
S314: calculating the corresponding similarity score based on the minimum cumulative distance D and the path step number K.
5. The human body action evaluation method based on visual images according to claim 4, wherein:
the cumulative distance is calculated by the following formula:
d(i,j)=c(i,j)+min{d(i-1,j-1),d(i-1,j),d(i,j-1)};
the optimal matching path is selected by the following formula:
D = DTW(P, Q) = min{ sqrt(c_1 + c_2 + … + c_K) };
the similarity score S is calculated from the minimum cumulative distance D, the path step number K and the adjustment coefficient h [formula image not recoverable];
in the above formulas: d(i, j) represents the cumulative distance accumulated from the starting position C(0,0) to position C(i, j); c_k represents the k-th element of the two-dimensional matrix C along a matching path; S represents the similarity score; h represents an adjustment coefficient, set to 0.2.
6. The human body action evaluation method based on visual images according to claim 3, wherein the types of counterclockwise rotation angle include the angle between the left forearm and the left upper arm, the angle between the left upper arm and the left shoulder, the angle between the left upper arm and the trunk, the angle between the trunk and the left thigh, the angle between the left thigh and the left calf, the angle between the right upper arm and the right forearm, the angle between the right shoulder and the right upper arm, the angle between the trunk and the right thigh, and the angle between the right thigh and the right calf;
when calculating the similarity between a counterclockwise rotation angle to be tested and a template counterclockwise rotation angle, a single calculation covers only counterclockwise rotation angles of one type.
7. The human body action evaluation method based on visual images according to claim 3, wherein in step S3, when the action evaluation auxiliary information is calculated, the recommended key points are selected to participate in the calculation through the following steps:
S321: calculating the variance of each counterclockwise rotation angle in the human body key point coordinate graph;
S322: calculating the motion information proportion corresponding to each counterclockwise rotation angle based on its variance;
S323: selecting the key points corresponding to the counterclockwise rotation angle with the maximum motion information proportion as the recommended key points.
8. The human body action evaluation method based on visual images according to claim 7, wherein:
the variance of a counterclockwise rotation angle is calculated by the following formula:
σ² = (1/T) · Σ_{t=1..T} (r_t − u_r)²;
the motion information proportion of a counterclockwise rotation angle is calculated by the following formula:
I_n = e^(σ_n²) / ( e^(σ_1²) + e^(σ_2²) + … + e^(σ_N²) );
in the above formulas: σ² represents the variance of the counterclockwise rotation angle; r_t represents the counterclockwise rotation angle in the t-th video frame and T the number of video frames; u_r represents the mean value of the counterclockwise rotation angle over the human body key point coordinate graphs; N represents the number of counterclockwise rotation angles in the human body key point coordinate graph; I_n represents the motion information proportion of the n-th counterclockwise rotation angle; σ_n² represents the variance of the n-th counterclockwise rotation angle; e denotes the natural constant.
9. The human body action evaluation method based on visual images according to claim 1, wherein in step S3, the action evaluation auxiliary information includes the Euclidean distance between key points, calculated by the following formula:
d(A, B) = sqrt( (x_1 − y_1)² + (x_2 − y_2)² + … + (x_n − y_n)² );
in the above formula: d(A, B) denotes the Euclidean distance between key points A(x_1, x_2, …, x_n) and B(y_1, y_2, …, y_n).
10. The human body action evaluation method based on visual images according to claim 1, wherein in step S3, the action evaluation auxiliary information includes the positional relationship between key points; the positional relationship between key points includes slope and difference values.
CN202111423509.3A 2021-11-26 2021-11-26 Human body action evaluation method based on visual image Pending CN114092971A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111423509.3A CN114092971A (en) 2021-11-26 2021-11-26 Human body action evaluation method based on visual image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111423509.3A CN114092971A (en) 2021-11-26 2021-11-26 Human body action evaluation method based on visual image

Publications (1)

Publication Number Publication Date
CN114092971A true CN114092971A (en) 2022-02-25

Family

ID=80305058

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111423509.3A Pending CN114092971A (en) 2021-11-26 2021-11-26 Human body action evaluation method based on visual image

Country Status (1)

Country Link
CN (1) CN114092971A (en)


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114373531A (en) * 2022-02-28 2022-04-19 深圳市旗扬特种装备技术工程有限公司 Behavior action monitoring and correcting method, behavior action monitoring and correcting system, electronic equipment and medium
CN116110584A (en) * 2023-02-23 2023-05-12 江苏万顶惠康健康科技服务有限公司 Human health risk assessment early warning system
CN116110584B (en) * 2023-02-23 2023-09-22 江苏万顶惠康健康科技服务有限公司 Human health risk assessment early warning system


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination