CN112309540B - Motion evaluation method, device, system and storage medium - Google Patents
- Publication number
- CN112309540B (application CN202011173864.5A)
- Authority
- CN
- China
- Prior art keywords
- balance
- feature matrix
- balance feature
- action unit
- standard
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H20/00—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
- G16H20/30—ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The embodiments of the invention disclose a motion evaluation method, device, system and storage medium. The motion evaluation method comprises the following steps: determining three-dimensional skeleton data corresponding to a depth video stream of a current action unit and two-dimensional skeleton data corresponding to a color video stream of the current action unit; determining, on the principle of maximally reflecting balance capability, a balance feature matrix sequence that characterizes the balance capability of the current action unit from the three-dimensional and/or two-dimensional skeleton data, wherein the balance feature matrix sequence comprises balance feature matrices for at least two moments; and determining the matching degree between each balance feature matrix in the sequence and the corresponding standard balance feature matrix, so as to determine the evaluation result of the current action unit. This solves the problem of the low evaluation accuracy of existing motion evaluation devices.
Description
Technical Field
The embodiments of the invention relate to the field of sports equipment, and in particular to a motion evaluation method, device, system and storage medium.
Background
Tai Chi (Taijiquan) is a form of exercise imbued with Eastern character and philosophy, and it has a wide audience. Tai Chi performs excellently in restoring balance capability and has a restorative effect on body and mind, so it has gradually become a popular choice for improving physical and mental health and balance. However, unscientific Tai Chi practice often yields only slight improvement in balance capability, and may even injure the body, which hinders the use of traditional balance exercise in balance-capability rehabilitation.
In the course of realizing the invention, the inventors found that although balance-capability training such as Tai Chi can meet the needs of different rehabilitation groups, the evaluation accuracy of motion evaluation devices at the present stage is low. Balance training therefore usually requires on-site guidance from professional instructors, and ordinary people without such guidance cannot benefit from it.
Disclosure of Invention
The embodiments of the invention provide a motion evaluation method, device, system and storage medium, which solve the problem of the low evaluation accuracy of existing motion evaluation devices.
In a first aspect, an embodiment of the present invention provides a motion evaluation method, including:
determining three-dimensional skeleton data corresponding to a depth video stream of a current action unit and two-dimensional skeleton data corresponding to a color video stream of the current action unit;
determining, on the principle of maximally reflecting balance capability, a balance feature matrix sequence for characterizing the balance capability of the current action unit from the three-dimensional and/or two-dimensional bone data, wherein the balance feature matrix sequence comprises balance feature matrices for at least two moments;
and determining the matching degree between each balance feature matrix in the balance feature matrix sequence and the corresponding standard balance feature matrix, so as to determine the evaluation result of the current action unit.
In a second aspect, an embodiment of the present invention further provides a motion evaluation apparatus, including:
a bone data module, configured to determine three-dimensional bone data corresponding to the depth video stream of the current action unit and two-dimensional bone data corresponding to the color video stream of the current action unit;
a feature determining module, configured to determine, on the principle of maximally reflecting balance capability, a balance feature matrix sequence for characterizing the balance capability of the current action unit from the three-dimensional and/or two-dimensional bone data, wherein the balance feature matrix sequence comprises balance feature matrices for at least two moments;
and an evaluation module, configured to determine the matching degree between each balance feature matrix in the balance feature matrix sequence and the corresponding standard balance feature matrix, so as to determine the evaluation result of the current action unit.
In a third aspect, embodiments of the present invention further provide a motion evaluation system, the system including:
one or more processors;
a storage means for storing one or more programs;
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the motion evaluation method of any of the embodiments.
In a fourth aspect, embodiments of the present invention further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, perform the motion evaluation method of any of the embodiments.
Compared with the prior art, the motion evaluation method provided by the embodiments of the invention, on the principle of maximally reflecting balance capability, selects three-dimensional skeleton data, two-dimensional skeleton data, or both to determine the balance feature matrix sequence that characterizes the balance capability of the current action unit. This appropriately reduces the amount of data to be processed while preserving, to the greatest extent, the representation capability of the balance feature matrix sequence. On this basis, the matching degree between each balance feature matrix in the sequence and the corresponding standard balance feature matrix is determined, and the evaluation result of the current action unit is determined accordingly. This significantly improves the accuracy of evaluation results based on the balance feature matrix sequence, and thereby the accuracy of balance-training guidance.
Drawings
In order to illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present invention; those skilled in the art may obtain other drawings from them without inventive effort.
FIG. 1 is a flowchart of a motion evaluation method according to a first embodiment of the present invention;
FIG. 2 is a front view of the somatosensory sensor placement according to the first embodiment of the present invention;
FIG. 3 is a top view of the somatosensory sensor placement according to the first embodiment of the present invention;
FIG. 4 is a schematic diagram of display content according to the first embodiment of the present invention;
FIG. 5 is a flowchart of a motion evaluation method according to a second embodiment of the present invention;
FIG. 6 is a block diagram of a motion evaluation apparatus according to a third embodiment of the present invention;
FIG. 7 is a block diagram of a motion evaluation system according to a third embodiment of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the technical solutions of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention.
Example 1
Fig. 1 is a flowchart of a motion evaluation method according to an embodiment of the present invention. The technical solution of this embodiment is suitable for automatically determining a motion evaluation result from the depth video stream and the color video stream acquired by somatosensory sensors. The method can be implemented by the motion evaluation apparatus provided by the embodiments of the invention, which can be implemented in software and/or hardware and configured in a processor of a motion evaluation system. The method specifically comprises the following steps:
S101, determining three-dimensional skeleton data corresponding to a depth video stream of a current action unit and two-dimensional skeleton data corresponding to a color video stream of the current action unit.
The action unit is preferably a minimum action unit, preferably determined based on the movements of a professional athlete. The action unit may belong to exercises such as Tai Chi, Baduanjin (Eight-Section Brocade) or Wuqinxi (Five-Animal Exercises). Taking Tai Chi as an example, the minimum action units may be determined as follows: the motion of a professional Tai Chi athlete is captured with a motion capture system and a depth vision sensor, each movement is analyzed kinematically according to the moving parts and the balance-capability requirements, and the movement is divided into at least two minimum action units. A minimum action unit embodies, over the shortest time interval, the motion of one or more parts that reflects balance capability. The motion capture system may be a high-precision inertial or optical motion capture system whose sensors are placed near the athlete's joint points during use.
The joint points should include the mainstream human-movement joint points, and their number should not be less than that required by the deep learning model and by Kinect recognition. Joint-point bone data (hereinafter, bone data) includes, but is not limited to, time series of balance feature information such as inter-joint distances, angles, relative coordinates, joint velocities, accelerations, rotation angles and absolute coordinates in three- or two-dimensional coordinates. The selection for each action unit must fully reflect the motion balance of body parts such as the arms, trunk and pelvis.
The depth video stream and the color video stream are acquired by somatosensory sensors. To acquire complete, unoccluded bone data, this embodiment acquires user motion data with at least three somatosensory sensors, preferably placed at equal intervals around the user's movement area. For example, three Kinects are placed horizontally at a height of about 1 m on a circle of radius 3 m centered on the user's main activity area, the lines from adjacent Kinects to the circle center forming 120° angles; FIG. 2 is a front view of the Kinect placement and FIG. 3 is a top view. The three Kinects form an RGB-D acquisition array whose RGB and depth camera fields of view together fully cover the user's activity area. The Kinect whose field of view covers most of the user's joint points for the longest time serves as the main-view Kinect (the main Kinect); its coordinate system is the reference coordinate system for reconstructing three-dimensional bone data, and the remaining Kinects are called auxiliary Kinects.
Three-dimensional bone data for each action unit are determined from the main depth video stream and at least two auxiliary depth video streams of that unit. Optionally, the Kinects are calibrated with a 50 cm × 50 cm black-and-white grid cube placed at the center of the user's main activity area; registration is then computed from the sets of cube calibration points seen in each Kinect's view. Taking a target point set Q = {q_j(x_j, y_j, z_j) | j = 1, 2, …, m} of size m on the cube in the main Kinect's view, and a source point set P = {p_i(x_i, y_i, z_i) | i = 1, 2, …, n} of size n on the cube in an auxiliary Kinect's view, source coordinates are converted to target coordinates by:

(X_q, Y_q, Z_q)^T = R · (X_p, Y_p, Z_p)^T + T

where (X_p, Y_p, Z_p) are the coordinates of a cube calibration point in the auxiliary Kinect, (X_q, Y_q, Z_q) are its coordinates in the main Kinect, and R is the rotation matrix from the auxiliary Kinect coordinate system to the main Kinect coordinate system, the product of the elementary rotations about the coordinate axes:

R = R_x(α) · R_y(β) · R_z(γ)

T is the translation vector from the auxiliary Kinect coordinate system to the main Kinect coordinate system:

T = (t_x, t_y, t_z)^T

where α, β, γ are the rotation angles about the x, y and z axes, and t_x, t_y, t_z are the translation components along the x, y and z axes. In the point sets P and Q, the nearest point pairs (p_i, q_i) are found under certain constraints, and the optimal matching parameters R, T are computed so that the error function E(R, T) is minimized:

E(R, T) = (1/n) · Σ_{i=1}^{n} ||q_i − (R · p_i + T)||²
After the transformation matrices are determined, the user's three-dimensional skeletons acquired by the different Kinects are fused, each through its own transformation matrix and with median filtering, into the reference coordinate system of the main view, yielding the user's three-dimensionally reconstructed bone data.
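As an illustrative sketch (not the patent's exact procedure), the least-squares rigid fit that minimizes E(R, T) over corresponding point pairs can be solved in closed form by the SVD-based Kabsch method. The function name and the synthetic data below are invented for the example:

```python
import numpy as np

def fit_rigid_transform(P, Q):
    """Least-squares rotation R and translation T mapping source points P
    onto target points Q (rows are corresponding 3-D points), minimizing
    E(R, T) = (1/n) * sum ||q_i - (R p_i + T)||^2  (Kabsch/SVD method)."""
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)            # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                           # proper rotation (det = +1)
    T = q_mean - R @ p_mean
    return R, T

# Synthetic check: transform a point set by a known R, T and recover them.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
angle = np.pi / 6
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
T_true = np.array([0.5, -1.0, 2.0])
Q = P @ R_true.T + T_true                        # q_i = R p_i + T
R_est, T_est = fit_rigid_transform(P, Q)
```

With exact correspondences this recovers R and T directly; the patent's nearest-point search and constraint handling (the ICP outer loop) would wrap around such a fit.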
The color video streams acquired by each Kinect are read in real time to obtain a main color video stream and at least two auxiliary color video streams, from which at least three sets of two-dimensional skeleton data are determined for each action unit. Different sets of two-dimensional skeleton data correspond to different shooting directions, so the two-dimensional feature vector that best reflects the balance capability of the corresponding action unit can be selected from among them.
In this embodiment, the main color video stream and the at least two auxiliary color video streams of each action unit are preprocessed and then fed into a trained deep learning model to obtain at least three sets of two-dimensional bone data per action unit. In some embodiments, the color video streams from different shooting directions are processed with the OpenPose human-keypoint estimation model to obtain at least two sets of two-dimensional bone data per action unit. OpenPose is an open-source library developed by Carnegie Mellon University (CMU), based on convolutional neural networks and supervised learning and built on the Caffe framework.
In some embodiments, bone data captured by different Kinects are acquired by different acquisition servers, and within one acquisition server the three-dimensional and two-dimensional bone data are acquired and processed in parallel using multiple threads. This reduces the time required for acquisition and processing and achieves real-time performance.
S102, determining, on the principle of maximally reflecting balance capability, a balance feature matrix sequence for characterizing the balance capability of the current action unit from the three-dimensional and/or two-dimensional bone data, wherein the balance feature matrix sequence comprises balance feature matrices for at least two moments.
The principle of maximally reflecting balance capability means that the balance capability of the minimum action unit is reflected to a large, or the largest, extent; that is, the determined balance feature matrix must reflect the balance capability of the corresponding action unit.
The user's two-dimensional and/or three-dimensional bone data are segmented into bone data for at least two minimum action units, and a balance feature matrix is determined for each frame of two-dimensional and/or three-dimensional bone data of each minimum action unit. Each balance feature matrix contains one or more balance feature vectors, and each balance feature vector contains one or more feature values. The balance feature vectors must fully represent the balance-capability differences of each minimum action unit; examples include two-dimensional bone joint distances, joint angles and joint velocities (for instance, a joint-distance feature vector may include the distance between the two palms). Features that cannot fully represent the balance of the minimum action unit, such as the distance from the toe to the ankle, are not selected.
For any minimum action unit: if, compared with the two-dimensional bone data, the three-dimensional bone data better reflects how the unit's balance capability differs from that of other minimum action units, the unit's balance feature vectors are determined from the three-dimensional bone data; otherwise they are determined from the two-dimensional bone data. If the two-dimensional and three-dimensional bone data reflect the difference from different aspects, one or more groups of balance feature vectors are determined from each, and together they form the unit's balance feature vectors.
For any minimum action unit, each frame of two-dimensional or three-dimensional bone data corresponds to one balance feature matrix, and the matrices arranged in time order form the balance feature matrix sequence. A balance feature matrix contains one or more balance feature vectors, and a balance feature vector contains one or more balance feature values. The composition of the balance feature vectors and their feature values is consistent over the whole time series of the action unit: at any moment, i.e. in any balance feature matrix, the first feature vector V_0 always contains the same features, such as a_0 (the distance between the palms) and a_1 (the distance between the soles).
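To make this structure concrete, here is a minimal Python sketch of building a balance feature matrix per frame and a sequence over frames. The joint names, the choice of features and the frame data are all hypothetical, chosen only to mirror the V_0 / a_0 / a_1 example in the text:

```python
import numpy as np

def balance_feature_matrix(frame):
    """One balance feature matrix for one frame: rows are balance feature
    vectors whose composition is fixed across all frames of the unit.
    `frame` maps hypothetical joint names to 2-D coordinates."""
    def dist(a, b):
        return float(np.linalg.norm(np.asarray(frame[a], dtype=float)
                                    - np.asarray(frame[b], dtype=float)))
    v0 = [dist("left_palm", "right_palm"),    # a_0: palm-to-palm distance
          dist("left_sole", "right_sole")]    # a_1: sole-to-sole distance
    v1 = [dist("left_palm", "left_sole"),     # second feature vector
          dist("right_palm", "right_sole")]
    return np.array([v0, v1])

frames = [
    {"left_palm": (0, 1), "right_palm": (1, 1),
     "left_sole": (0, 0), "right_sole": (1, 0)},
    {"left_palm": (0, 2), "right_palm": (2, 2),
     "left_sole": (0, 0), "right_sole": (2, 0)},
]
# The balance feature matrix sequence: one matrix per frame, in time order.
sequence = [balance_feature_matrix(f) for f in frames]
```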
When performing Tai Chi, the duration of each of the user's movements may not match that of the standard movements even though every movement itself is standard. This embodiment therefore uses the DTW (Dynamic Time Warping) algorithm to resolve the deviation in overall matching degree caused by the mismatch between user and standard movement durations: the balance feature matrix sequence of the user's current movement is matched against the corresponding standard sequence by dynamic time warping, and the sequence is updated so that its length equals that of the standard balance feature matrix sequence.
The matching process may proceed as follows. Let the standard balance feature matrix sequence S of minimum action unit A have m frames, and the user's balance feature matrix sequence U for that unit have n frames, the i-th frame's matrix being u_i. An n × m grid T is constructed in which the element at coordinate (i, j) represents the alignment of u_i with s_j, its value being the distance d(u_i, s_j) = ||u_i − s_j||, which measures the similarity between u_i and s_j. The accumulated distance γ(i, j), the length of the shortest warping path from (0, 0) to (i, j), is then computed by the following formula:
γ(i, j) = d(u_i, s_j) + min{γ(i−1, j−1), γ(i−1, j), γ(i, j−1)}
γ(n, m) and the minimum matching path M are obtained by dynamic programming, and the user's balance feature matrix sequence U is reorganized along the minimum matching path into a new balance feature matrix sequence U′ whose length equals that of the standard balance feature matrix sequence.
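A compact Python sketch of this DTW step follows, assuming d(u_i, s_j) is the Frobenius distance between balance feature matrices; the resampling rule used to build U′ from the minimum matching path (keeping the last user frame matched to each standard frame) is one plausible choice, not necessarily the patent's:

```python
import numpy as np

def dtw_align(U, S):
    """DTW between a user sequence U (n frames) and a standard sequence S
    (m frames); each frame is a balance feature matrix (ndarray).
    gamma(i, j) = d(u_i, s_j) + min of the three predecessors."""
    n, m = len(U), len(S)
    gamma = np.full((n + 1, m + 1), np.inf)
    gamma[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(U[i - 1] - S[j - 1])
            gamma[i, j] = d + min(gamma[i - 1, j - 1],
                                  gamma[i - 1, j],
                                  gamma[i, j - 1])
    # Backtrack the minimum matching path M from (n, m) to (1, 1).
    path, i, j = [], n, m
    while (i, j) != (1, 1):
        path.append((i - 1, j - 1))
        steps = [(i - 1, j - 1), (i - 1, j), (i, j - 1)]
        i, j = min(steps, key=lambda ij: gamma[ij])
    path.append((0, 0))
    path.reverse()
    # Resample U along the path: one user frame per standard frame.
    u_for_s = {}
    for ui, sj in path:
        u_for_s[sj] = ui
    U_new = [U[u_for_s[j]] for j in range(m)]
    return gamma[n, m], U_new

# Toy 1-feature "matrices": the user lingers one extra frame at the start.
U = [np.array([0.0]), np.array([0.0]), np.array([1.0]), np.array([2.0])]
S = [np.array([0.0]), np.array([1.0]), np.array([2.0])]
total, U_prime = dtw_align(U, S)
```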
Each action unit corresponds to a standard balance feature matrix sequence, determined from the balance feature vectors derived from the two-dimensional or three-dimensional bone data of a professional athlete.
S103, determining the matching degree of each balance feature matrix in the balance feature matrix sequence and the corresponding standard balance feature matrix so as to determine the evaluation result of the current action unit.
To improve the characterization capability of the balance feature matrices, this embodiment introduces a weight matrix: balance feature vectors with strong characterization capability are given larger weights, and those with weak characterization capability are given smaller weights.
The weight matrix is determined as follows. For each balance feature matrix, the weight value of each feature value of each balance feature vector is determined from the magnitude of change of the corresponding feature in the two-dimensional and/or three-dimensional standard bone data between adjacent frames, giving the corresponding weight vector; each weight vector is then multiplied by its weight coefficient to yield the weight matrix. For any balance feature matrix, the weight coefficients of different balance feature vectors may be equal or unequal, but they sum to 1. The weight coefficients are empirical values, with the balance feature vectors of key body parts weighted higher than those of ordinary parts.
Preferably, after the magnitude of change of each feature value of each balance feature vector within the interval (t, t+1) is determined, the magnitudes are normalized to give the weight values of the corresponding weight vector. Taking the inter-joint angle vector of a balance feature matrix as an example, the angle differences θ_i (θ_1, θ_2, …) formed by all inter-joint vectors over (t, t+1) are normalized as

v_i = θ_i / Σ_{j=1}^{N} θ_j

where N is the number of feature values in each feature vector. The weight vector of the inter-joint angles, w_k(v_1, v_2, …), is then determined from the normalization result, k being the identifier of the weight vector. Multiplying each weight vector in the weight matrix by its weight coefficient updates the weight vectors and yields the weight matrix, which may be written W_m(β_1·w_1, β_2·w_2, …), where β_i is the weight coefficient of the corresponding balance feature vector and m identifies both the weight matrix and its balance feature matrix. After the weight matrices of all balance feature matrices are determined, they are collected in time order into a dynamic weight set, which may be written D(W_1, W_2, …).
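An illustrative sketch of this weighting scheme in Python: per-feature change magnitudes between frames t and t+1 are normalized within each balance feature vector, then scaled by that vector's weight coefficient β_i. The numeric data are invented, and treating the normalized values directly as weights is an assumption consistent with the description:

```python
import numpy as np

def weight_matrix(deltas, betas):
    """Dynamic weight matrix for one balance feature matrix.
    `deltas` holds one array of change magnitudes per balance feature
    vector (e.g. inter-joint angle differences over (t, t+1)); each is
    normalized so larger changes get larger weights, then scaled by the
    vector's weight coefficient beta_i (the betas sum to 1)."""
    assert abs(sum(betas) - 1.0) < 1e-9
    rows = []
    for delta, beta in zip(deltas, betas):
        delta = np.asarray(delta, dtype=float)
        w = delta / delta.sum()     # normalization: weights sum to 1 per vector
        rows.append(beta * w)       # scale by the vector's weight coefficient
    return rows

# Example: angle changes for two balance feature vectors of one matrix.
deltas = [np.array([1.0, 3.0]),     # vector 1: two angle differences
          np.array([2.0, 2.0])]     # vector 2
betas = [0.6, 0.4]                  # key body part weighted higher
W = weight_matrix(deltas, betas)
```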
Regarding the inter-joint vector angle: within one frame of bone data, take the coordinates joint_i(x_i, y_i, z_i) and joint_j(x_j, y_j, z_j) of adjacent joint points in the minimum action unit. The vector pointing from joint_i to joint_j is the inter-joint vector Bone_{i,t}, i.e. Bone_{i,t} = joint_j(x_j, y_j, z_j) − joint_i(x_i, y_i, z_i). Bone_{i,t} forms an inter-joint vector angle θ_{i,t} with the same inter-joint vector Bone_{i,t+1} in the next frame of three-dimensional bone data; this angle indicates how much the inter-joint vector rotates within one frame interval and can be expressed as:

θ_{i,t} = arccos( (Bone_{i,t} · Bone_{i,t+1}) / (|Bone_{i,t}| · |Bone_{i,t+1}|) )
The origin of each inter-joint vector should be chosen as close to the torso as possible. Different inter-joint vectors are linked end to end, and the start point of a linked chain must be a point with no absolute coordinate change between the two frames, called a dead point. A dead point should lie as close as possible to the mid-sagittal plane of the body and show no absolute coordinate change over a very long time span, so as to avoid large accumulated errors at the end of an overly long vector chain. Candidate dead points include joint points on the spine with no absolute coordinate change, and the calcaneal joint when it has no absolute coordinate change during whole-body motion.
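The inter-joint vector and its frame-to-frame angle can be sketched directly from the definitions above; the joints and coordinates used here are hypothetical:

```python
import numpy as np

def inter_joint_vector(joint_i, joint_j):
    """Bone_{i,t}: the vector pointing from joint_i to joint_j in one frame."""
    return np.asarray(joint_j, dtype=float) - np.asarray(joint_i, dtype=float)

def inter_joint_angle(bone_t, bone_t1):
    """theta_{i,t}: angle between the same inter-joint vector in two
    consecutive frames, arccos of the normalized dot product."""
    c = np.dot(bone_t, bone_t1) / (np.linalg.norm(bone_t)
                                   * np.linalg.norm(bone_t1))
    return float(np.arccos(np.clip(c, -1.0, 1.0)))  # clip guards rounding

# Example: a forearm vector rotating 90 degrees about the z axis
# between frame t and frame t+1, with the elbow as the (fixed) origin.
b_t = inter_joint_vector((0, 0, 0), (1, 0, 0))
b_t1 = inter_joint_vector((0, 0, 0), (0, 1, 0))
angle = inter_joint_angle(b_t, b_t1)
```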
It can be understood that, after the dynamic weight matrix of each action unit is determined, the action units can be classified according to the balance feature vector with the largest weight, or according to the balance capability represented by that vector.
For any smallest action unit, once the weight matrix of each balance feature matrix is determined, the evaluation result of that smallest action unit can be determined. In one embodiment, at time t, the balance feature vector in the balance feature matrix is V_{u,i}, the corresponding vector in the standard balance feature matrix is V_{s,i}, and the dynamic weight vector of V_{u,i} is w_i; the Hadamard products of V_{u,i} and of V_{s,i} with w_i are calculated to obtain the updated balance feature vector V′_{u,i} and the updated standard balance feature vector V′_{s,i}:
V′_{u,i} = V_{u,i} ∘ w_i
V′_{s,i} = V_{s,i} ∘ w_i
The degree of match between V′_{u,i} and V′_{s,i} is measured in vector-angle (cosine) form, specifically:
c_t = Σ_{i=1}^{n} β_i · (V′_{u,i} · V′_{s,i}) / (‖V′_{u,i}‖ · ‖V′_{s,i}‖)
wherein n is the total number of balance feature vectors contained in the updated balance feature matrix, V′_{u,i} is the balance feature vector identified as i in the updated balance feature matrix, V′_{s,i} is the standard balance feature vector identified as i in the updated standard balance feature matrix, and β_i is the weight coefficient of the balance feature vector V′_{u,i}.
The average value of the matching degrees of all balance feature matrices is calculated as the evaluation result of the current action unit, specifically:
score = (1 / t_0) · Σ_{t=1}^{t_0} c_t
wherein t_0 is the number of balance feature matrices contained in the current balance feature matrix sequence and c_t is the matching degree of the balance feature matrix identified as t. It can be appreciated that, after the comprehensive matching degree of each smallest action unit of the training content is determined, the matching degree of the training content can be determined from the evaluation results of the individual action units, for example by taking the average of the evaluation results of all action units as the matching degree of the training content.
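Under the assumption that the match per instant is a β-weighted cosine similarity of the weighted feature vectors, and that the unit score is the average over the sequence, the evaluation can be sketched as follows (names and array shapes are illustrative):

```python
import numpy as np

def matrix_match(V_u, V_s, W, beta):
    # V_u, V_s: (n, d) user / standard balance feature vectors at one instant
    # W: (n, d) dynamic weight vectors; beta: (n,) weight coefficients
    Vu, Vs = V_u * W, V_s * W                      # Hadamard products
    cos = np.sum(Vu * Vs, axis=1) / (
        np.linalg.norm(Vu, axis=1) * np.linalg.norm(Vs, axis=1))
    return float(np.dot(beta, cos))                # weighted cosine match c_t

def unit_score(seq_u, seq_s, seq_w, beta):
    # average the per-instant matches over the t0 matrices of the sequence
    return sum(matrix_match(u, s, w, beta)
               for u, s, w in zip(seq_u, seq_s, seq_w)) / len(seq_u)
```

With β summing to 1, a user sequence identical to the standard sequence scores 1.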
In some embodiments, a quantized evaluation result of the training content is given by combining, for each action unit of the training content, its balance characteristics with the time consumed and the partial or overall matching conditions of completing the corresponding action.
In some embodiments, the motion evaluation system may further determine training advice for subsequent balance-capability training, such as training duration, training interval, choice of training actions, and training intensity, according to how the quantized evaluation results of the user's training change over time.
In some embodiments, in Taijiquan balance training, the evaluation of one or more balance capabilities is accomplished by customizing different combinations of Taijiquan balance characteristics according to actual requirements.
In some embodiments, referring to fig. 4, in order to improve the effect of balance capability training, a standard action video for guiding the training is output while the user performs balance training. The standard action video may be obtained by capturing real-time video of a Taijiquan professional athlete with Kinect, then segmenting it and processing the background to obtain standard Taijiquan action videos of a plurality of smallest action units, which are played in a loop to guide the user through the actions of each smallest action unit. While the standard action video is output, the real-time video of the user's balance training captured by Kinect and the evaluation result of each action unit are also output.
In some embodiments, the real-time video of the user's balance training and the standard action video of the professional athlete are output side by side in a split screen on a display in front of the activity area, and the matching degree between each of the user's action units and the professional athlete's standard actions is output in real time at the upper left of the large screen, so that the user can clearly see the difference between his or her own actions and the standard actions. The matching degree includes, but is not limited to, the comprehensive matching degree of each action unit and the three-dimensional or two-dimensional bone data matching degree of different body parts.
In some embodiments, if it is detected that the matching degree of one or more action units completed by the user does not reach a preset matching degree threshold, a prompt message is output prompting the user to practice those action units repeatedly until their matching degree reaches the threshold, thereby ensuring that the user completes the training content satisfactorily each time and obtains the corresponding training effect.
Compared with the prior art, the technical solution of the motion evaluation device provided by the embodiment of the invention selects, based on the principle of maximum balance capability embodiment, three-dimensional skeleton data, two-dimensional skeleton data, or both to determine the balance feature matrix sequence representing the balance capability of the current action unit. This appropriately reduces the amount of data to be processed while preserving, to the greatest extent, the representation capability of the balance feature matrix sequence of the current action unit. On this basis, the matching degree between each balance feature matrix in the sequence and the corresponding standard balance feature matrix is determined, and the evaluation result of the current action unit is determined accordingly; this significantly improves the accuracy of the evaluation result based on the balance feature matrix sequence and thus the accuracy of balance training guidance.
Example two
Fig. 5 is a flowchart of a motion estimation method according to a second embodiment of the present invention. The embodiment of the invention adds the preprocessing step of the balance characteristic matrix sequence on the basis of the embodiment.
Accordingly, the method of the present embodiment includes:
S201, determining three-dimensional skeleton data corresponding to a depth video stream of a current action unit and two-dimensional skeleton data corresponding to a color video stream of the current action unit.
S202, determining a balance characteristic matrix sequence for representing the balance capability of the current action unit according to three-dimensional bone data and/or two-dimensional bone data based on the principle of maximum balance capability embodiment, wherein the balance characteristic matrix sequence comprises balance characteristic matrixes at least at two moments.
S203, preprocessing the balance characteristic matrix sequence to update the balance characteristic matrix sequence.
After the balance feature matrix sequence is obtained, the actually measured bone data and the predicted bone data are combined to perform cubic (triple) exponential smoothing on the balance feature matrix sequence of the current user's bone data, so as to reduce the interference caused by observation problems, such as noise errors, in the actual bone data acquisition. The cubic exponential smoothing algorithm comprises the following smoothing formulas and prediction model:
S′_t = α·y_t + (1 − α)·S′_{t−1}
S″_t = α·S′_t + (1 − α)·S″_{t−1}
S‴_t = α·S″_t + (1 − α)·S‴_{t−1}
b_t = α / (2(1 − α)²) · [(6 − 5α)S′_t − 2(5 − 4α)S″_t + (4 − 3α)S‴_t]
c_t = α² / (1 − α)² · (S′_t − 2S″_t + S‴_t)
F_{t+m} = a_t + b_t·m + (1/2)·c_t·m², with a_t = 3S′_t − 3S″_t + S‴_t
Wherein S′_t, S″_t and S‴_t are the first, second and third exponential smoothing values in period t, α is the smoothing constant, and y_t is the actual value in period t, i.e. the balance feature matrix value at time t, with initial condition S′_1 = S″_1 = S‴_1 = y_1; b_t and c_t are the linear smoothing model parameters, F_{t+m} is the predicted value in period t+m, and m is the number of periods ahead. The delay and covariance characteristics of the predicted and actual values are compared, and the smoothing constant is adjusted to obtain a suitable smoothing prediction model. The prediction model is then used to complete the prediction and correction of the balance feature matrix sequence so as to update it.
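Assuming the "cubic exponential smoothing" referred to is Brown's triple exponential smoothing in its standard form, a sketch of the smoother applied to one feature-value series, producing one-step-ahead predictions:

```python
def triple_exp_smoothing(y, alpha):
    # Brown's cubic (triple) exponential smoothing, one-step-ahead forecasts F_{t+1}
    s1 = s2 = s3 = y[0]            # initial condition S'_1 = S''_1 = S'''_1 = y_1
    preds = []
    for yt in y:
        s1 = alpha * yt + (1 - alpha) * s1
        s2 = alpha * s1 + (1 - alpha) * s2
        s3 = alpha * s2 + (1 - alpha) * s3
        a = 3 * s1 - 3 * s2 + s3
        b = alpha / (2 * (1 - alpha) ** 2) * (
            (6 - 5 * alpha) * s1 - 2 * (5 - 4 * alpha) * s2 + (4 - 3 * alpha) * s3)
        c = alpha ** 2 / (1 - alpha) ** 2 * (s1 - 2 * s2 + s3)
        preds.append(a + b * 1 + 0.5 * c * 1 ** 2)  # F_{t+m} with m = 1
    return preds
```

On a constant series the forecast reproduces the constant, as expected of an unbiased smoother.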
S204, determining the matching degree of each balance feature matrix in the updated balance feature matrix sequence and the corresponding standard balance feature matrix so as to determine the evaluation result of the current action unit.
According to the motion evaluation method provided by this embodiment of the invention, the balance feature matrix sequence of each of the user's action units is smoothed with the cubic exponential smoothing algorithm, which reduces the influence of noise on each balance feature vector in the sequence, improves the accuracy of the balance feature matrix sequence, and thereby improves the accuracy of the motion evaluation.
Example III
Fig. 6 is a block diagram of a motion estimation device according to a third embodiment of the present invention. The apparatus is used to perform the motion estimation method provided in any of the above embodiments, and the apparatus may be implemented in software or hardware. The device comprises:
the bone data module 11 is configured to determine three-dimensional bone data corresponding to a depth video stream of a current action unit and two-dimensional bone data corresponding to a color video stream of the current action unit;
The feature determining module 12 is configured to determine, based on a principle of maximum balance capability embodiment, a balance feature matrix sequence for representing a balance capability of the current action unit according to the three-dimensional skeleton data and/or the two-dimensional skeleton data, where the balance feature matrix sequence includes balance feature matrices at least at two moments;
And the evaluation module 13 is used for determining the matching degree of each balance feature matrix in the balance feature matrix sequence and the corresponding standard balance feature matrix so as to determine the evaluation result of the current action unit.
Optionally, the feature determining module 12 is further configured to perform matching processing on the balanced feature matrix sequence of the current action unit and the corresponding standard balanced feature matrix sequence by using a dynamic time warping algorithm, so as to update the balanced feature matrix sequence, so that the time length of the updated balanced feature matrix sequence is the same as the time length of the standard balanced feature matrix sequence.
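The dynamic time warping alignment mentioned here can be sketched as follows; `warp_to_standard` illustrates one possible resampling rule (an assumption, not the patent's exact scheme) that gives the user sequence the same length as the standard sequence:

```python
import numpy as np

def dtw_path(seq_u, seq_s, dist):
    # classic DTW: cumulative cost matrix plus backtracked warping path
    n, m = len(seq_u), len(seq_s)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            D[i, j] = dist(seq_u[i - 1], seq_s[j - 1]) + min(
                D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([D[i - 1, j - 1], D[i - 1, j], D[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def warp_to_standard(seq_u, seq_s, dist):
    # resample the user sequence onto the standard sequence's time axis
    aligned = {}
    for i, j in dtw_path(seq_u, seq_s, dist):
        aligned.setdefault(j, i)   # first user frame matched to each standard frame
    return [seq_u[aligned[j]] for j in range(len(seq_s))]
```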
Optionally, the evaluation module is configured to calculate a product of each balance feature matrix in the balance feature matrix sequence and a corresponding weight matrix, so as to obtain an updated balance feature matrix sequence; calculating the product of each standard balance feature matrix in the standard balance feature matrix sequence and the corresponding weight matrix to obtain an updated standard balance feature matrix sequence; and determining the matching degree of each balance feature matrix in the updated balance feature matrix sequence and the corresponding standard balance feature matrix in the updated standard balance feature matrix sequence so as to determine the evaluation result of the current action unit.
The device also comprises a weight determining module, wherein the weight determining module is used for determining the weight value of the characteristic value of each balance characteristic vector as the weight value of the corresponding weight vector according to the characteristic value of each balance characteristic vector in each balance characteristic matrix and the change amplitude of the characteristic value of each balance characteristic vector in the two-dimensional standard bone data and/or the three-dimensional standard bone data of the adjacent frames; the product of each weight vector and the corresponding weight coefficient is calculated to update each weight vector to determine the corresponding weight matrix.
Optionally, the evaluation module 13 is configured to determine the matching degree between the balance feature matrix and the corresponding standard balance feature matrix by the following formula:
c_t = Σ_{i=1}^{n} β_i · (V′_{u,i} · V′_{s,i}) / (‖V′_{u,i}‖ · ‖V′_{s,i}‖)
and to calculate the average value of the matching degrees of all balance feature matrices as the evaluation result of the current action unit;
wherein t is the identifier of the balance feature matrix, n is the total number of balance feature vectors contained in the updated balance feature matrix, V′_{u,i} is the balance feature vector identified as i in the updated balance feature matrix, V′_{s,i} is the standard balance feature vector identified as i in the updated standard balance feature matrix, and β_i is the weight coefficient corresponding to V′_{u,i}.
The motion estimation device provided by the embodiment of the invention can execute the motion estimation method provided by any embodiment of the invention, and has the corresponding functional modules and beneficial effects of the execution method.
Example IV
Fig. 7 is a schematic structural diagram of a motion evaluation system according to an embodiment of the present invention. As shown in fig. 7, the system includes a processor 201, a memory 202, an input device 203, and an output device 204. The number of processors 201 may be one or more; one processor 201 is taken as an example in fig. 7. The processor 201, memory 202, input device 203, and output device 204 in the apparatus may be connected by a bus or other means; connection by a bus is taken as an example in fig. 7.
The memory 202 is used as a computer readable storage medium for storing software programs, computer executable programs, and modules, such as program instructions/modules (e.g., the bone data module 11, the feature determination module 12, and the assessment module 13) corresponding to the motion assessment method in the embodiment of the present invention. The processor 201 executes various functional applications of the device and data processing, i.e., implements the motion estimation method described above, by running software programs, instructions, and modules stored in the memory 202.
The memory 202 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, at least one application program required for functions; the storage data area may store data created according to the use of the terminal, etc. In addition, memory 202 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, memory 202 may further include memory located remotely from processor 201, which may be connected to the device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The input means 203 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the device.
The output device 204 may comprise a display device of the user terminal, such as a display screen, preferably a large liquid crystal screen, for displaying in real time the standard color video stream of each action unit, the real-time color video stream of the user's movement, and the real-time evaluation result of each action unit.
The system further comprises a somatosensory sensor group containing at least three somatosensory sensors arranged at equal intervals around the circumference of the user activity area, such as the three Kinects in fig. 3, for acquiring the depth video streams and color video streams of the user's movement. The Kinect whose field of view covers most of the user's joint nodes for the longest time is taken as the main-view Kinect (the main Kinect); its coordinate system serves as the reference coordinate system for reconstructing the three-dimensional bone data, and the remaining Kinects are called auxiliary Kinects. All Kinects are connected through an audio line to synchronize their clock signals.
The system preferably also includes gigabit switches, a server cluster, and the like. The server cluster comprises acquisition servers, a storage server, a main control server, and so on. Bone data captured by different Kinects are acquired by different acquisition servers, and within the same acquisition server the three-dimensional and two-dimensional bone data are acquired and processed in parallel in a multithreaded manner, reducing the time required for acquisition and processing and achieving real-time performance.
The system adopts an interactive training-evaluation virtual scene built on the Unity3D platform, aimed at providing an immersive experience for users of the Taijiquan balance training evaluation. It includes the virtual training scene, standard Taijiquan action guidance, and user feedback. The virtual training scene includes user-feedback elements that change with the degree of training completion, including changes in the background music. The user can interact with the scene through gestures: the user's gestures are captured by the main-view Kinect, and different feedback is produced in the training scene according to preset gesture types, including adjusting the background music volume, switching scenes, and so on. Standard action videos of Taijiquan professional athletes are captured by Kinect, and after segmentation and background processing, color video streams of the standard Taijiquan actions of a plurality of action units are obtained and played in a loop in the scene to guide the user through the actions of each action unit. The user feedback comprises two parts: video image feedback and matching degree feedback. Real-time videos of the user are captured by several Kinects and displayed in the scene, making it easy for the user to compare his or her actions against the standard actions and adjust accordingly. The user's action matching degree is displayed in the upper left corner of the scene and includes, but is not limited to, the comprehensive matching degree of each action unit and the three-dimensional or two-dimensional bone data matching degree of different body parts. If the user does not reach a certain matching degree threshold, the user is reminded to repeat the smallest action unit until its training is completed, as shown in fig. 4.
Example five
Embodiments of the present invention also provide a storage medium containing computer-executable instructions, which when executed by a computer processor, are for performing a motion estimation method, the method comprising:
Determining three-dimensional skeleton data corresponding to a depth video stream of a current action unit and two-dimensional skeleton data corresponding to a color video stream of the current action unit;
Based on the principle of maximum balance capability embodiment, determining a balance feature matrix sequence for representing the balance capability of the current action unit according to the three-dimensional bone data and/or the two-dimensional bone data, wherein the balance feature matrix sequence comprises balance feature matrixes at least at two moments;
And determining the matching degree of each balance feature matrix in the balance feature matrix sequence and the corresponding standard balance feature matrix so as to determine the evaluation result of the current action unit.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the method operations described above, and may also perform the related operations in the motion estimation method provided in any embodiment of the present invention.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk, or an optical disk of a computer, where the instructions include a plurality of instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to perform the motion estimation method according to the embodiments of the present invention.
It should be noted that, in the above embodiment of the motion estimation apparatus, each unit and module included are only divided according to the functional logic, but not limited to the above division, so long as the corresponding functions can be implemented; in addition, the specific names of the functional units are also only for distinguishing from each other, and are not used to limit the protection scope of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.
Claims (7)
1. A method of motion estimation, comprising:
Determining three-dimensional skeleton data corresponding to a depth video stream of a current action unit and two-dimensional skeleton data corresponding to a color video stream of the current action unit;
Based on the principle of maximum balance capability embodiment, determining a balance feature matrix sequence for representing the balance capability of the current action unit according to the three-dimensional bone data and/or the two-dimensional bone data, wherein the balance feature matrix sequence comprises balance feature matrices at least at two moments and comprises the following steps: matching the balance characteristic matrix sequence of the current action unit with the corresponding standard balance characteristic matrix sequence by adopting a dynamic time warping algorithm to update the balance characteristic matrix sequence, so that the time length of the updated balance characteristic matrix sequence is the same as that of the standard balance characteristic matrix sequence;
determining the matching degree of each balance feature matrix in the balance feature matrix sequence and the corresponding standard balance feature matrix to determine the evaluation result of the current action unit, and determining the quantitative evaluation result of the training content according to the evaluation results of all the action units;
the determining the matching degree between each balance feature matrix in the balance feature matrix sequence and the corresponding standard balance feature matrix to generate the evaluation result of the current action unit includes:
Calculating the product of each balance feature matrix in the balance feature matrix sequence and the corresponding weight matrix to obtain an updated balance feature matrix sequence;
Calculating the product of each standard balance feature matrix in the standard balance feature matrix sequence and the corresponding weight matrix to obtain an updated standard balance feature matrix sequence;
Determining the matching degree of each balance feature matrix in the updated balance feature matrix sequence and the corresponding standard balance feature matrix in the updated standard balance feature matrix sequence so as to determine the evaluation result of the current action unit;
wherein the weight matrix is determined by:
According to the feature value of each balance feature vector in each balance feature matrix, determining a weight value of the feature value of each balance feature vector as a weight value of a corresponding weight vector in a normalization result of the variation amplitude in the two-dimensional standard bone data and/or the three-dimensional standard bone data of the adjacent frames;
Calculating the product of each weight vector and the corresponding weight coefficient to update each weight vector, thereby determining a corresponding weight matrix; wherein, the weight coefficient of the balance characteristic vector of the key part is higher than that of the balance characteristic vector of the general part.
2. The method of claim 1, wherein each action unit corresponds to a set of primary depth video stream and at least two sets of secondary depth video streams, and a set of primary color video stream and at least two sets of secondary color video streams, and wherein determining three-dimensional bone data corresponding to the depth video streams for each action unit, respectively, comprises:
determining three-dimensional bone data corresponding to each action unit according to the main depth video stream and at least two sets of auxiliary depth video streams of each action unit;
Correspondingly, determining the three-dimensional skeleton data corresponding to the color video stream of each action unit comprises the following steps:
and determining at least three groups of two-dimensional bone data corresponding to each action unit according to the main color video stream and at least two sets of auxiliary color video streams of each action unit.
3. The method of claim 1, wherein determining the degree of matching of each of the updated sequence of balanced feature matrices with a corresponding standard balanced feature matrix of the updated sequence of standard balanced feature matrices to determine the evaluation result of the current action unit comprises:
determining the matching degree of the balance feature matrix and the corresponding standard balance feature matrix through the following formula:
c_t = Σ_{i=1}^{n} β_i · (V′_{u,i} · V′_{s,i}) / (‖V′_{u,i}‖ · ‖V′_{s,i}‖);
calculating the average value of the matching degrees of all the balance feature matrices as the evaluation result of the current action unit;
wherein t is the identifier of the balance feature matrix, n is the total number of balance feature vectors contained in the updated balance feature matrix, V′_{u,i} is the balance feature vector identified as i in the updated balance feature matrix, V′_{s,i} is the standard balance feature vector identified as i in the updated standard balance feature matrix, and β_i is the weight coefficient identified as i.
4. A motion estimation apparatus, comprising:
the bone data module is used for determining three-dimensional bone data corresponding to the depth video stream of the current action unit and two-dimensional bone data corresponding to the color video stream of the current action unit;
the feature determining module is configured to determine, based on a principle of maximum balance capability embodiment, a balance feature matrix sequence for representing a balance capability of a current action unit according to the three-dimensional skeleton data and/or the two-dimensional skeleton data, where the balance feature matrix sequence includes balance feature matrices at least at two moments, and includes: matching the balance characteristic matrix sequence of the current action unit with the corresponding standard balance characteristic matrix sequence by adopting a dynamic time warping algorithm to update the balance characteristic matrix sequence, so that the time length of the updated balance characteristic matrix sequence is the same as that of the standard balance characteristic matrix sequence;
the evaluation module is configured to determine the matching degree between each balance feature matrix in the balance feature matrix sequence and the corresponding standard balance feature matrix, so as to determine the evaluation result of the current action unit, and to determine a quantitative evaluation result of the training content according to the evaluation results of all the action units;
wherein the evaluation module is specifically configured to:
calculate the product of each balance feature matrix in the balance feature matrix sequence and the corresponding weight matrix to obtain an updated balance feature matrix sequence;
calculate the product of each standard balance feature matrix in the standard balance feature matrix sequence and the corresponding weight matrix to obtain an updated standard balance feature matrix sequence; and
determine the matching degree between each balance feature matrix in the updated balance feature matrix sequence and the corresponding standard balance feature matrix in the updated standard balance feature matrix sequence, so as to determine the evaluation result of the current action unit;
wherein the weight matrix is determined by:
setting the weight value of the weight vector corresponding to each balance feature vector in each balance feature matrix to the normalized variation amplitude, between adjacent frames of the two-dimensional standard bone data and/or the three-dimensional standard bone data, of the feature value of that balance feature vector; and
calculating the product of each weight vector and its corresponding weight coefficient to update each weight vector, thereby determining the corresponding weight matrix, wherein the weight coefficient of a balance feature vector of a key body part is higher than that of a balance feature vector of a general body part.
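The alignment and weighted-matching steps claimed above can be sketched roughly as follows. This is an illustrative Python sketch, not the patented implementation: the Frobenius frame distance, element-wise weighting, and cosine-style matching score are assumptions the claims do not specify.

```python
import numpy as np

def dtw_align(seq, std_seq):
    """Align a balance feature matrix sequence to the standard sequence with
    dynamic time warping so the result has the standard sequence's length.
    The Frobenius norm between whole feature matrices is an assumed distance."""
    n, m = len(seq), len(std_seq)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(seq[i - 1] - std_seq[j - 1])  # frame distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1],
                                 cost[i - 1, j - 1])
    # backtrack the warping path from (n, m) to (1, 1)
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = int(np.argmin([cost[i - 1, j - 1], cost[i - 1, j],
                              cost[i, j - 1]]))
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    # resample onto the standard timeline: one user frame per standard frame
    aligned = [None] * m
    for si, ti in reversed(path):
        aligned[ti] = seq[si]
    return aligned

def build_weight_matrix(std_frames, part_coeffs):
    """Weight each feature by its normalized frame-to-frame variation in the
    standard bone data, scaled by a per-part coefficient (key parts higher)."""
    var = np.abs(np.diff(np.stack(std_frames), axis=0)).mean(axis=0)
    total = var.sum()
    w = var / total if total > 0 else np.full_like(var, 1.0 / var.size)
    return w * part_coeffs

def matching_degree(seq, std_seq, weight):
    """Apply the weight matrix to both sequences, then average a per-frame
    cosine-style similarity (the similarity metric itself is an assumption)."""
    scores = []
    for f, s in zip(seq, std_seq):
        fw, sw = f * weight, s * weight      # element-wise weighting assumed
        den = np.linalg.norm(fw) * np.linalg.norm(sw)
        scores.append(np.sum(fw * sw) / den if den > 0 else 1.0)
    return float(np.mean(scores))
```

Here each frame is assumed to be a (joints × features) matrix; `part_coeffs` is a hypothetical per-joint column of coefficients, larger for key parts, implementing the higher weighting the claim assigns to key-part feature vectors.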
5. A motion evaluation system, the system comprising:
one or more processors; and
a storage means for storing one or more programs;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the motion evaluation method of any one of claims 1-4.
6. The system of claim 5, further comprising:
a somatosensory sensor group comprising at least three somatosensory sensors arranged at equal intervals around the circumference of the user activity area, for acquiring depth video streams and color video streams during user movement; and
an output device for displaying in real time the color video stream of the standard action and the color video stream of the user movement, and for outputting the evaluation result of each action unit in real time.
7. A storage medium containing computer-executable instructions which, when executed by a computer processor, perform the motion evaluation method according to any one of claims 1-4.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011173864.5A CN112309540B (en) | 2020-10-28 | 2020-10-28 | Motion evaluation method, device, system and storage medium |
PCT/CN2020/129301 WO2022088290A1 (en) | 2020-10-28 | 2020-11-17 | Motion assessment method, apparatus and system, and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011173864.5A CN112309540B (en) | 2020-10-28 | 2020-10-28 | Motion evaluation method, device, system and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112309540A CN112309540A (en) | 2021-02-02 |
CN112309540B true CN112309540B (en) | 2024-05-14 |
Family
ID=74331527
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011173864.5A Active CN112309540B (en) | 2020-10-28 | 2020-10-28 | Motion evaluation method, device, system and storage medium |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112309540B (en) |
WO (1) | WO2022088290A1 (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113326851B (en) * | 2021-05-21 | 2023-10-27 | 中国科学院深圳先进技术研究院 | Image feature extraction method and device, electronic equipment and storage medium |
CN114627559B (en) * | 2022-05-11 | 2022-08-30 | 深圳前海运动保网络科技有限公司 | Exercise plan planning method, device, equipment and medium based on big data analysis |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20170104322A (en) * | 2016-03-07 | 2017-09-15 | 한국전자통신연구원 | Method for searching similar choreography based on three dimensions and apparatus using the same |
CN108615055A (en) * | 2018-04-19 | 2018-10-02 | 咪咕动漫有限公司 | Similarity calculation method, device and computer-readable storage medium
CN108763560A (en) * | 2018-06-04 | 2018-11-06 | 大连大学 | Three-dimensional human motion retrieval method based on graph model
Family Cites Families (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103699795B (en) * | 2013-12-20 | 2018-01-23 | 东软熙康健康科技有限公司 | Motion behavior recognition method, device and exercise intensity monitoring system |
US9857881B2 (en) * | 2015-12-31 | 2018-01-02 | Microsoft Technology Licensing, Llc | Electrical device for hand gestures detection |
CN106228143A (en) * | 2016-08-02 | 2016-12-14 | 王国兴 | Method for scoring instructional videos by motion contrast with camera video |
US20180053308A1 (en) * | 2016-08-22 | 2018-02-22 | Seiko Epson Corporation | Spatial Alignment of Inertial Measurement Unit Captured Golf Swing and 3D Human Model For Golf Swing Analysis Using IR Reflective Marker |
CN108597578B (en) * | 2018-04-27 | 2021-11-05 | 广东省智能制造研究所 | Human motion assessment method based on two-dimensional skeleton sequence |
CN109740418B (en) * | 2018-11-21 | 2022-10-14 | 中山大学 | Yoga action identification method based on multiple acceleration sensors |
CN109589563B (en) * | 2018-12-29 | 2021-06-22 | 南京华捷艾米软件科技有限公司 | Dance posture teaching and assisting method and system based on 3D motion sensing camera |
CN109833608B (en) * | 2018-12-29 | 2021-06-22 | 南京华捷艾米软件科技有限公司 | Dance action teaching and assisting method and system based on 3D motion sensing camera |
CN109887572A (en) * | 2019-03-21 | 2019-06-14 | 福建中医药大学 | Balance function training method and system |
CN110675936B (en) * | 2019-10-29 | 2021-08-03 | 华中科技大学 | Fitness compensation assessment method and system based on OpenPose and binocular vision |
CN111403039A (en) * | 2020-03-19 | 2020-07-10 | 中国科学院深圳先进技术研究院 | Dynamic balance evaluation method, device, equipment and medium |
2020
- 2020-10-28: CN application CN202011173864.5A granted as patent CN112309540B (status: Active)
- 2020-11-17: WO application PCT/CN2020/129301 published as WO2022088290A1 (Application Filing)
Non-Patent Citations (1)
Title |
---|
Han Hong Lin et al., "Stillness Moves: Exploring Body Weight-Transfer Learning in Physical Training for Tai-Chi Exercise," MMSports'18, pp. 21-29. *
Also Published As
Publication number | Publication date |
---|---|
CN112309540A (en) | 2021-02-02 |
WO2022088290A1 (en) | 2022-05-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109191588B (en) | Motion teaching method, motion teaching device, storage medium and electronic equipment | |
CN105913487B (en) | Gaze direction calculation method based on iris edge analysis and matching in eye images | |
Wu et al. | Futurepose-mixed reality martial arts training using real-time 3d human pose forecasting with a rgb camera | |
CN104598867B (en) | Automatic human action evaluation method and dance scoring system | |
JP7057959B2 (en) | Motion analysis device | |
CN110544301A (en) | Three-dimensional human body action reconstruction system, method and action training system | |
US11945125B2 (en) | Auxiliary photographing device for dyskinesia analysis, and control method and apparatus for auxiliary photographing device for dyskinesia analysis | |
US7404774B1 (en) | Rule based body mechanics calculation | |
CN112309540B (en) | Motion evaluation method, device, system and storage medium | |
CN112288766B (en) | Motion evaluation method, device, system and storage medium | |
WO2017161734A1 (en) | Correction of human body movements via television and motion-sensing accessory and system | |
CN110544302A (en) | Human body action reconstruction system and method based on multi-view vision and action training system | |
CN114022512B (en) | Exercise assisting method, apparatus and medium | |
US20210286983A1 (en) | Estimation method, and computer-readable recording medium recording estimation program | |
CN114120168A (en) | Target running distance measuring and calculating method, system, equipment and storage medium | |
CN117766098B (en) | Body-building optimization training method and system based on virtual reality technology | |
CN109407826A (en) | Ball game simulation method, device, storage medium and electronic equipment | |
CN112633261A (en) | Image detection method, device, equipment and storage medium | |
US10607359B2 (en) | System, method, and apparatus to detect bio-mechanical geometry in a scene using machine vision for the application of a virtual goniometer | |
CN110148202B (en) | Method, apparatus, device and storage medium for generating image | |
KR102510048B1 (en) | Control method of electronic device to output augmented reality data according to the exercise motion | |
Lin et al. | Design of motion capture system in physical education teaching based on machine vision | |
KR102347693B1 (en) | Apparatus, method, computer-readable storage medium and computer program for providing big data based on motion information extracted from video information | |
Chen | 3D Convolutional Neural Networks based Movement Evaluation System for Gymnasts in Computer Vision Applications | |
KR102347692B1 (en) | Apparatus, method, computer-readable storage medium and computer program for providing feedback of posture based on motion information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||