CN113947810A - Taijiquan evaluation method and system based on gesture recognition

Info

Publication number
CN113947810A
CN113947810A
Authority
CN
China
Prior art keywords
joint
taijiquan
evaluation
joint point
posture
Prior art date
Legal status
Pending
Application number
CN202111118432.9A
Other languages
Chinese (zh)
Inventor
胡建华
曾文英
吴伟美
魏嘉俊
Current Assignee
Guangdong Institute of Science and Technology
Original Assignee
Guangdong Institute of Science and Technology
Priority date
Filing date
Publication date
Application filed by Guangdong Institute of Science and Technology filed Critical Guangdong Institute of Science and Technology
Priority to CN202111118432.9A
Publication of CN113947810A
Legal status: Pending

Classifications

    • A63B69/004
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/0647 Visualisation of executed movements

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a Taijiquan evaluation method and system based on gesture recognition. The method comprises: acquiring a Taijiquan video to be evaluated, extracting a posture joint skeleton map of the person to be evaluated by a deep learning method, and generating a first joint point sequence map; and, based on a dynamic time warping (DTW) algorithm, assigning evaluation weight values of different magnitudes to different joint points, and comparing the similarity of the first joint point sequence map with a standard joint point sequence map to obtain an evaluation result, wherein the evaluation weight values of the limb joint points are higher than those of the trunk joint points and the head joint points. By adopting the DTW algorithm together with weight-based score proportions for the different joints, the method improves the scoring accuracy for Taijiquan, and learners can intuitively understand their degree of mastery of Taijiquan.

Description

Taijiquan evaluation method and system based on gesture recognition
Technical Field
The invention relates to the technical field of gesture recognition, in particular to a Taijiquan evaluation method and system based on gesture recognition.
Background
Taijiquan is a national-level intangible cultural heritage and a traditional Chinese martial art of the Han nationality. Practicing it cultivates the temperament and strengthens the body, and it is one of the sports most favored by the middle-aged and the elderly. Taijiquan movements are soft, slow, and even, with orderly opening and closing, and are light, agile, rounded, and supple. However, learners often find it difficult to master the key points of Taijiquan from videos alone, and their degree of mastery cannot be assessed. How to evaluate the learning of Taijiquan has therefore become an important issue.
Disclosure of Invention
The present invention is directed to solving at least one of the problems in the prior art. To this end, the invention provides a Taijiquan evaluation method based on gesture recognition that can evaluate Taijiquan postures, allowing learners to intuitively understand their degree of mastery of Taijiquan.
The invention also provides a Taijiquan evaluation system based on gesture recognition that applies the above Taijiquan evaluation method based on gesture recognition.
The invention also provides a computer-readable storage medium storing a program that implements the Taijiquan evaluation method based on gesture recognition.
The Taijiquan evaluation method based on gesture recognition according to the embodiment of the first aspect of the invention comprises: acquiring a Taijiquan video to be evaluated, extracting a posture joint skeleton map of the person to be evaluated by a deep learning method, and generating a first joint point sequence map; and, based on a dynamic time warping algorithm, assigning evaluation weight values of different magnitudes to different joint points, and comparing the similarity of the first joint point sequence map with a standard joint point sequence map to obtain an evaluation result; wherein the evaluation weight values of the limb joint points are higher than those of the trunk joint points and the head joint points.
The Taijiquan evaluation method based on gesture recognition according to the embodiment of the invention has at least the following beneficial effects: by adopting the DTW algorithm together with weight-based score proportions for the different joints, the scoring accuracy for Taijiquan is improved, and learners can intuitively understand their degree of mastery of Taijiquan.
According to some embodiments of the invention, a method for extracting a posture joint skeleton map of a person to be evaluated through a deep learning method comprises the following steps: acquiring each frame of image of the Taijiquan video to be evaluated, calculating gradient information of the image, and obtaining pixel classification according to the gradient information, wherein the gradient information comprises: angle, intensity and consistency information; and extracting a posture joint skeleton map of the person to be evaluated by a deep learning method according to the pixel classification to generate the first joint point sequence map.
According to some embodiments of the invention, the method further comprises: acquiring a standard Taijiquan video, extracting a posture joint skeleton map by the deep learning method, and generating the standard joint point sequence map.
According to some embodiments of the present invention, the deep learning network corresponding to the deep learning method includes two branches: the first branch is used for predicting a human body joint point confidence map; a second branch for predicting a human body part affinity field; the first branch and the second branch are both iterative cascade structures.
According to some embodiments of the invention, the limb joint points comprise: the right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right knee, right ankle, left knee, and left ankle; the evaluation weight values of the limb joint points are each configured to be 9%.
According to some embodiments of the invention, the trunk joint points comprise the right hip and the left hip, and the evaluation weight values of the trunk joint points are each configured to be 2%; the head joint points comprise: the nose, neck, left eye, right eye, left ear, and right ear; the evaluation weight values of the head joint points are each configured to be 1%.
A Taijiquan evaluation system based on gesture recognition according to an embodiment of the second aspect of the present invention includes: a sequence map generation module, configured to receive a Taijiquan video to be evaluated, extract a posture joint skeleton map of the person to be evaluated through a deep learning method, and generate a first joint point sequence map; and a posture evaluation module, configured to, based on a dynamic time warping algorithm, assign evaluation weight values of different magnitudes to different joint points, and compare the similarity of the first joint point sequence map with a standard joint point sequence map to obtain an evaluation result; wherein the evaluation weight values of the limb joint points are higher than those of the trunk joint points and the head joint points.
The Taijiquan evaluation system based on gesture recognition has at least the following beneficial effects: by adopting the DTW algorithm together with weight-based score proportions for the different joints, the scoring accuracy for Taijiquan is improved, and learners can intuitively understand their degree of mastery of Taijiquan.
A computer-readable storage medium according to an embodiment of the third aspect of the invention has stored thereon a computer program which, when executed by a processor, implements a method according to an embodiment of the first aspect of the invention.
The computer-readable storage medium according to an embodiment of the present invention has at least the same advantageous effects as the method according to an embodiment of the first aspect of the present invention.
Additional aspects and advantages of the invention will be set forth in part in the description which follows and, in part, will be obvious from the description, or may be learned by practice of the invention.
Drawings
The above and/or additional aspects and advantages of the present invention will become apparent and readily appreciated from the following description of the embodiments, taken in conjunction with the accompanying drawings of which:
FIG. 1 is a schematic flow chart of a method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a deep learning network according to an embodiment of the present invention;
FIG. 3 is a schematic diagram of part affinity field computation according to an embodiment of the present invention;
FIG. 4 is a schematic representation of two sequences;
FIG. 5 is a graph plotting the corresponding compression relationship between the two sequences of FIG. 4;
FIG. 6 is a sequence trajectory diagram of a Taijiquan gesture of a hand joint point in an embodiment of the present invention;
FIG. 7 is a schematic diagram of a comparison of the sequence trace of FIG. 6 with a corresponding standard action;
FIG. 8 is a schematic view of the human skeleton model in an embodiment of the present invention;
FIG. 9 is a block diagram of the modules of the system of an embodiment of the present invention.
Reference numerals:
sequence map generation module 100 and posture evaluation module 200.
Detailed Description
Reference will now be made in detail to embodiments of the present invention, examples of which are illustrated in the accompanying drawings, wherein like or similar reference numerals refer to the same or similar elements or elements having the same or similar function throughout. The embodiments described below with reference to the accompanying drawings are illustrative only for the purpose of explaining the present invention, and are not to be construed as limiting the present invention.
In the description of the present invention, "several" means one or more and "a plurality" means two or more; "greater than", "less than", "exceeding", and the like are understood to exclude the stated number, while "above", "below", "within", and the like are understood to include it. Where "first" and "second" are used, they serve only to distinguish technical features and are not to be understood as indicating or implying relative importance, the number of the indicated features, or their precedence. Step numbers are used merely for convenience of description or reference; they do not imply an execution order, which should be determined by the functions and inherent logic of the steps, and they impose no limitation on the implementation of the embodiments of the present invention.
Referring to FIG. 1, the method of an embodiment of the present invention includes: acquiring a Taijiquan video to be evaluated, extracting a posture joint skeleton map of the person to be evaluated by a deep learning method, and generating a first joint point sequence map; and, based on a dynamic time warping algorithm, assigning evaluation weight values of different magnitudes to different joint points, and comparing the similarity of the first joint point sequence map with the standard joint point sequence map to obtain an evaluation result; wherein the evaluation weight values of the limb joint points are higher than those of the trunk joint points and the head joint points.
In the embodiment of the invention, each frame of image data of the Taijiquan video to be evaluated is acquired, the gradient information of the image is calculated, and the gradient information, comprising angle, intensity, and consistency information, is used as a search index to obtain a pixel classification. In this embodiment there are 24 angle types, 3 intensity types, and 3 consistency types, giving a total of 24 × 3 × 3 = 216 global pixel classification types.
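As an illustration of this classification step (a minimal Python sketch, not code from the patent: the binning thresholds and the use of a smoothed structure tensor to measure consistency are assumptions), each pixel can be binned by gradient angle, intensity, and consistency as follows:

```python
# Illustrative sketch of the 24 x 3 x 3 = 216 pixel classification.
# Thresholds and the structure-tensor window are assumed values.
import numpy as np
from scipy.ndimage import uniform_filter

def classify_pixels(gray, strength_thresholds=(10.0, 40.0),
                    coherence_thresholds=(0.25, 0.5), window=5):
    gy, gx = np.gradient(gray.astype(np.float64))

    # Gradient angle in [0, pi), quantized into 24 bins.
    angle = np.mod(np.arctan2(gy, gx), np.pi)
    angle_bin = np.minimum((angle / np.pi * 24).astype(int), 23)

    # Gradient intensity (magnitude), quantized into 3 levels.
    strength_bin = np.digitize(np.hypot(gx, gy), strength_thresholds)

    # Consistency (coherence) from the smoothed 2x2 structure tensor.
    jxx = uniform_filter(gx * gx, size=window)
    jyy = uniform_filter(gy * gy, size=window)
    jxy = uniform_filter(gx * gy, size=window)
    root = np.sqrt((jxx - jyy) ** 2 + 4 * jxy ** 2)
    l1 = np.maximum((jxx + jyy + root) / 2, 0)
    l2 = np.maximum((jxx + jyy - root) / 2, 0)
    coherence = (np.sqrt(l1) - np.sqrt(l2)) / (np.sqrt(l1) + np.sqrt(l2) + 1e-12)
    coherence_bin = np.digitize(coherence, coherence_thresholds)

    # Global class index in [0, 216).
    return (angle_bin * 3 + strength_bin) * 3 + coherence_bin
```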
In this embodiment, a standard Taijiquan video is acquired, a posture joint skeleton map is extracted by the deep learning method, and a standard joint point sequence map is generated and used as the reference for posture similarity comparison. The standard Taijiquan video may be obtained, for example, by recording a Taijiquan teacher performing the routine.
For the Taijiquan video, the posture joint skeleton map is extracted with a deep learning method. The deep learning network structure for human posture recognition is shown in FIG. 2. The network consists of two branches: the first branch predicts the human joint point confidence maps; the second branch predicts the human part affinity fields (PAFs). Each branch is an iterative cascade structure, so the predictions are continuously refined in the subsequent stages, and the losses of all stages jointly supervise network training. For an input picture of size w × h, features are first extracted by a convolutional network to generate a feature map F.
The feature map is then input into the two-branch network. Define the joint confidence maps as $S = (S^1, S^2, \ldots, S^J)$, consisting of $J$ sub-maps with $S^j \in \mathbb{R}^{w \times h}$, $j \in \{1, 2, \ldots, J\}$, where $J$ is the number of human joint points. Define the part affinity fields as $L = (L^1, L^2, \ldots, L^C)$, consisting of $C$ vector maps, each recording the two-dimensional direction of one skeletal connection, so $L^c \in \mathbb{R}^{w \times h \times 2}$, $c \in \{1, 2, \ldots, C\}$, where $C$ is the number of skeletal connections. In the first stage, the network predicts the joint confidence maps $S^1$ and the part affinity fields $L^1$:

$$S^1 = \rho^1(F) \tag{1}$$

$$L^1 = \phi^1(F) \tag{2}$$

where $\rho^1$ and $\phi^1$ denote the forward computation of the first stage of the network. In each subsequent stage $t \ge 2$, the predictions of both branches from the previous stage are concatenated with the original feature map $F$ and used as input:

$$S^t = \rho^t(F, S^{t-1}, L^{t-1}) \tag{3}$$

$$L^t = \phi^t(F, S^{t-1}, L^{t-1}) \tag{4}$$
each stage uses two L2The loss function iteratively predicts the joint confidence map and the human affinity portion fields. It is worth noting that a spatial pixel weighting mechanism is also introduced into the loss function, and the spatial pixel weighting mechanism has strong robustness for the problem of missing-labeled human body postures in the training set. The loss functions of the two branches in the t stage are as follows:
Figure BDA0003274639740000055
Figure BDA0003274639740000056
wherein
Figure BDA0003274639740000057
Is the annotated joint confidence map,
Figure BDA0003274639740000058
is the annotated human portion affinity field. p is the pixel point and W is the binary mask. When the pixel point p is not labeled, W (p) is 0, so that the condition that the label is punished to be missed and the network is correctly detected can be avoided. Finally, the loss of each stage forms the whole loss function of the network, see the formula:
Figure BDA0003274639740000059
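The cascade of equations (1) through (7) can be sketched in PyTorch as below. This is a minimal illustration under assumed hyperparameters (layer counts, channel widths, J = 18 joints, C = 19 limb connections), not the patent's exact network:

```python
# Minimal two-branch iterative cascade in the spirit of equations (1)-(4).
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, hidden=128, n_convs=3):
    layers, ch = [], in_ch
    for _ in range(n_convs):
        layers += [nn.Conv2d(ch, hidden, 3, padding=1), nn.ReLU(inplace=True)]
        ch = hidden
    layers += [nn.Conv2d(ch, out_ch, 1)]
    return nn.Sequential(*layers)

class TwoBranchCascade(nn.Module):
    def __init__(self, feat_ch=128, n_joints=18, n_limbs=19, n_stages=3):
        super().__init__()
        paf_ch = 2 * n_limbs  # each limb type stores a 2-D vector field
        self.rho = nn.ModuleList()  # branch 1: confidence maps S^t
        self.phi = nn.ModuleList()  # branch 2: part affinity fields L^t
        for t in range(n_stages):
            in_ch = feat_ch if t == 0 else feat_ch + n_joints + paf_ch
            self.rho.append(conv_block(in_ch, n_joints))
            self.phi.append(conv_block(in_ch, paf_ch))

    def forward(self, F):
        outputs, x = [], F
        for rho_t, phi_t in zip(self.rho, self.phi):
            S, L = rho_t(x), phi_t(x)
            outputs.append((S, L))           # every stage is supervised
            x = torch.cat([F, S, L], dim=1)  # eqs (3)-(4): concat with F
        return outputs
```

The training loss of equations (5) through (7) would then sum a masked squared error over `outputs`, with the binary mask W zeroing any pixel that lacks annotation.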
the detection of the bone joint points is performed through the joint point confidence maps. Each joint point confidence map represents the likelihood that a particular body joint will appear at each pixel location in the picture. Ideally, if there is only one person in the picture, there is only one peak in each joint confidence map. If there are k individuals in the picture, there are corresponding k peaks in the joint confidence map for the study joint j. Joint point confidence map S as label*Resulting from the annotated joint location. Firstly, generating a joint point confidence map for each joint point j of each person k in the picture
Figure BDA0003274639740000061
The formula is as follows:
Figure BDA0003274639740000062
wherein xj,k∈R2The pixel position of the jth joint of the kth individual in the picture is labeled, and σ is the standard deviation and represents the course of peak top spread. And finally, aggregating the joint point confidence maps of each person into a multi-person joint point confidence map through a max operator. Which is represented by formula (9):
Figure BDA0003274639740000063
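Equations (8) and (9) translate directly into a few lines of NumPy; in this sketch the value of sigma is an assumed one:

```python
# Render equation (8)'s Gaussian peak per person, then merge with eq. (9)'s max.
import numpy as np

def confidence_map(joint_xy, width, height, sigma=7.0):
    """joint_xy: (K, 2) array of annotated (x, y) positions of one joint type
    for K people, i.e. the x_{j,k} of equation (8)."""
    ys, xs = np.mgrid[0:height, 0:width]
    maps = [np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / sigma ** 2)
            for (x, y) in joint_xy]
    return np.max(maps, axis=0)  # eq. (9): pixel-wise max over people
```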
in order to correctly assign the detected skeletal joint points to each person to connect into a skeleton, a human body part affinity field is introduced. The human body part affinity field is a two-dimensional vector map. For each pixel point on the limb connection in the picture, the two-dimensional direction information of the connection of two skeletal joint points is recorded. Each specific type of limb connection corresponds to a body part affinity field. The study object shown in FIG. 3 is limb c, xj1,kAnd xj2,kIs the k < th > personc, and p is any point on the picture. Vector of human affinity field at point p when p is on limb c
Figure BDA0003274639740000064
Is a unit vector v, when p is not on the limb c,
Figure BDA0003274639740000065
the value of (b) is 0. The expression of ν is shown in equation (10):
Figure BDA0003274639740000066
if p is on limb c, formula (11) is satisfied:
0≤ν·(p-xj1,k)≤lc,k&&|ν·(p-xj1,k)|≤σl (11)
wherein lc,k=||xj2,k-xj1,k||2Is the pixel length of the limb c, σlIs the pixel width of limb c. And finally averaging the human body affinity fields of all people in the picture to obtain the labeled human body affinity field. As shown in equation (12):
Figure BDA0003274639740000067
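A NumPy sketch of equations (10) through (12) for a single limb type follows (illustrative only; the limb width sigma_l is an assumed value):

```python
# Mark the unit vector v on every pixel inside each person's limb rectangle
# (eqs 10-11), then average where several people's fields overlap (eq. 12).
import numpy as np

def paf_for_limb(starts, ends, width, height, sigma_l=8.0):
    """starts, ends: (K, 2) arrays of x_{j1,k} and x_{j2,k} for K people."""
    ys, xs = np.mgrid[0:height, 0:width]
    p = np.stack([xs, ys], axis=-1).astype(np.float64)  # H x W x 2 pixel grid
    field = np.zeros((height, width, 2))
    count = np.zeros((height, width))
    for a, b in zip(np.asarray(starts, float), np.asarray(ends, float)):
        length = np.linalg.norm(b - a)
        if length < 1e-6:
            continue
        v = (b - a) / length                     # eq. (10)
        d = p - a
        along = d @ v                            # component along the limb axis
        across = np.abs(d[..., 0] * v[1] - d[..., 1] * v[0])  # |v_perp . d|
        on_limb = (along >= 0) & (along <= length) & (across <= sigma_l)  # eq. (11)
        field[on_limb] += v
        count[on_limb] += 1
    nonzero = count > 0
    field[nonzero] /= count[nonzero, None]       # eq. (12): average over people
    return field
```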
In summary, when performing posture recognition, the joint confidence maps and the part affinity fields are finally obtained from the image through the convolutional network. To determine whether two joint points from the confidence maps can be connected into a limb, the degree of alignment between the line joining the two joint points and the corresponding segment of the part affinity field is computed. Specifically, let $d_{j_1}$ and $d_{j_2}$ be the coordinates of two skeletal joint points detected from the confidence maps; the confidence $E$ that they form a connected limb is

$$E = \int_{u=0}^{1} L_c\big(p(u)\big) \cdot \frac{d_{j_2} - d_{j_1}}{\left\| d_{j_2} - d_{j_1} \right\|_2} \, du \tag{13}$$

where the points along the candidate limb are

$$p(u) = (1-u)\, d_{j_1} + u\, d_{j_2} \tag{14}$$
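In practice the integral of equation (13) is approximated by sampling points along the segment of equation (14); in the following sketch the sample count of 10 is an assumption:

```python
# Approximate equation (13) by sampling the PAF along the segment of eq. (14).
import numpy as np

def connection_confidence(paf, d_j1, d_j2, n_samples=10):
    """paf: H x W x 2 affinity field of this limb type; d_j1, d_j2: (x, y)."""
    d1, d2 = np.asarray(d_j1, float), np.asarray(d_j2, float)
    norm = np.linalg.norm(d2 - d1)
    if norm < 1e-6:
        return 0.0
    unit = (d2 - d1) / norm
    total = 0.0
    for u in np.linspace(0.0, 1.0, n_samples):
        x, y = (1 - u) * d1 + u * d2                       # eq. (14)
        xi = int(round(min(max(x, 0), paf.shape[1] - 1)))  # clamp to the image
        yi = int(round(min(max(y, 0), paf.shape[0] - 1)))
        total += paf[yi, xi] @ unit                        # integrand of eq. (13)
    return total / n_samples
```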
Since one frame of image may contain multiple persons, the confidence maps detect multiple candidates for joints $j_1$ and $j_2$, $j \in \{1, \ldots, J\}$. Let $D_{j_1}$ be the set of detected candidates of joint type $j_1$, with $m$ indexing its points, and let $D_{j_2}$ be the set of detected candidates of joint type $j_2$, with $n$ indexing its points. Let $z_{mn}^{j_1 j_2} \in \{0, 1\}$ denote the connection state between the $m$-th point of $j_1$ and the $n$-th point of $j_2$. Finding the optimal connection for limb $c$ then translates into a bipartite graph matching problem, expressed mathematically as

$$\max_{Z_c} E_c = \max_{Z_c} \sum_{m \in D_{j_1}} \sum_{n \in D_{j_2}} E_{mn} \cdot z_{mn}^{j_1 j_2} \tag{15}$$

The optimal solution of each sub-problem is obtained from equation (15); summing over all limb types, equation (16) yields the multi-person human pose estimation result:

$$E = \sum_{c=1}^{C} \max_{Z_c} E_c \tag{16}$$
From the RGB image, the 2D pose $(x, y)$ coordinates of each joint can thus be estimated, yielding the human skeleton detection result.
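A common practical relaxation of the matching problem in equations (15) and (16) is a greedy assignment over scored candidate pairs. The sketch below reuses connection_confidence from the previous sketch; it is an illustration, not the solver prescribed by the patent:

```python
# Greedy approximation of the bipartite matching of equation (15): score all
# candidate (m, n) pairs, then accept them in descending order so that each
# joint candidate is used at most once.
def match_limb(candidates_j1, candidates_j2, paf):
    scored = sorted(
        ((connection_confidence(paf, d1, d2), m, n)
         for m, d1 in enumerate(candidates_j1)
         for n, d2 in enumerate(candidates_j2)),
        reverse=True)
    used_m, used_n, connections = set(), set(), []
    for score, m, n in scored:
        if score > 0 and m not in used_m and n not in used_n:
            connections.append((m, n, score))  # contributes E_mn to eq. (16)
            used_m.add(m)
            used_n.add(n)
    return connections
```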
The DTW algorithm was originally proposed for sequence matching; when a sequence exhibits temporal drift, the Euclidean distance measure fails. Consider two sequences, X = [2, 3, 4, 7, 9, 2, 1, 2, 1] and Y = [1, 1, 1, 1, 2, 3, 3, 4, 7, 8, 9, 1, 1, 1, 1], plotted on coordinate axes as shown in FIG. 4.
As can be seen from FIG. 4, the Euclidean distance between the two sequences is large because the sequences drift along the horizontal axis. This embodiment solves the problem with the DTW algorithm, which warps (compresses) the two sequences along the time axis so as to minimize the total "distance" between them, where the pointwise distance is typically the Euclidean distance.
In effect, the task is to find the minimum-cost path from (X[0], Y[0]) to (X[N], Y[M]). For the sequences X and Y given above, the compression path found is:
[(0,0),(0,1),(0,2),(0,3),(0,4),(1,5),(1,6),(2,7),(3,8),(4,9),(4,10),(5,11),(6,11),(6,12),(6,13),(6,14),(7,14),(8,14)]. The corresponding compression relationship is plotted in FIG. 5. The globally optimal solution is obtained by dynamic programming.
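The dynamic-programming solution can be sketched compactly in Python; applied to the sequences X and Y above, it recovers a minimum-cost compression path of the kind just listed (illustrative code, not from the patent):

```python
# Dynamic-programming DTW: fill the cumulative cost table, then backtrack
# to recover the warping ("compression") path from (0, 0) to (N-1, M-1).
def dtw(x, y):
    n, m = len(x), len(y)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(x[i - 1] - y[j - 1])          # pointwise distance
            cost[i][j] = d + min(cost[i - 1][j],
                                 cost[i][j - 1],
                                 cost[i - 1][j - 1])
    path, i, j = [], n, m                          # backtrack
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        _, i, j = min((cost[i - 1][j - 1], i - 1, j - 1),
                      (cost[i - 1][j], i - 1, j),
                      (cost[i][j - 1], i, j - 1))
    return cost[n][m], path[::-1]

X = [2, 3, 4, 7, 9, 2, 1, 2, 1]
Y = [1, 1, 1, 1, 2, 3, 3, 4, 7, 8, 9, 1, 1, 1, 1]
distance, path = dtw(X, Y)
```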
Taking the hand joints as an example, FIG. 6 shows the joint point sequence traced by a hand joint during a Taijiquan posture.
Because the joint point sequences of different people performing the same Taijiquan posture generally differ in length, the similarity between the first joint point sequence map and the standard joint point sequence map is compared with the DTW algorithm, which yields a score for each individual key point, as shown in FIG. 7.
The invention further applies weight-based score proportions to the different joints, increasing the weights of the hand and foot joints during Taijiquan posture recognition. The human skeleton point model contains 18 skeleton points: nose, neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle, left eye, right eye, left ear, and right ear, as shown in FIG. 8.
Owing to the nature of Taijiquan movement, how standard a movement is depends mainly on the positions of the skeletal joints of the limbs: the right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right knee, right ankle, left knee, and left ankle. These key points are therefore given higher weights in the score calculation, so that the final score meets the standard evaluation of Taijiquan movements. The Taijiquan score is calculated as in equation (17):
$$p = \sum_{i=0}^{17} \omega_i \, p_i \tag{17}$$

where $p$ denotes the final score for the skeleton point positions, $0 \le p \le 100$, $p_i$ the score of skeleton point $i$ obtained from the DTW comparison, $\omega_i$ the weight assigned to skeleton point $i$, and $i$ the skeleton point index. Different weight values are assigned to the different skeleton points, configured as: nose $\omega_0 = 1\%$, neck $\omega_1 = 1\%$, right shoulder $\omega_2 = 9\%$, right elbow $\omega_3 = 9\%$, right wrist $\omega_4 = 9\%$, left shoulder $\omega_5 = 9\%$, left elbow $\omega_6 = 9\%$, left wrist $\omega_7 = 9\%$, right hip $\omega_8 = 2\%$, right knee $\omega_9 = 9\%$, right ankle $\omega_{10} = 9\%$, left hip $\omega_{11} = 2\%$, left knee $\omega_{12} = 9\%$, left ankle $\omega_{13} = 9\%$, left eye $\omega_{14} = 1\%$, right eye $\omega_{15} = 1\%$, left ear $\omega_{16} = 1\%$, right ear $\omega_{17} = 1\%$.
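Equation (17) and the weight table reduce to a short lookup. In the sketch below, the per-joint scores are assumed to be already normalized to the 0 to 100 range by the DTW comparison:

```python
# Weighted final score of equation (17); the 18 weights sum to 100%.
WEIGHTS = {
    "nose": 0.01, "neck": 0.01,
    "right_shoulder": 0.09, "right_elbow": 0.09, "right_wrist": 0.09,
    "left_shoulder": 0.09, "left_elbow": 0.09, "left_wrist": 0.09,
    "right_hip": 0.02, "right_knee": 0.09, "right_ankle": 0.09,
    "left_hip": 0.02, "left_knee": 0.09, "left_ankle": 0.09,
    "left_eye": 0.01, "right_eye": 0.01, "left_ear": 0.01, "right_ear": 0.01,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9

def final_score(joint_scores):
    """joint_scores: dict mapping joint name -> per-joint score in [0, 100]."""
    return sum(WEIGHTS[name] * joint_scores[name] for name in WEIGHTS)
```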
The system of an embodiment of the present invention, referring to FIG. 9, includes: the sequence map generation module 100, configured to receive a Taijiquan video to be evaluated, extract a posture joint skeleton map of the person to be evaluated through a deep learning method, and generate a first joint point sequence map; and the posture evaluation module 200, configured to, based on a dynamic time warping algorithm, assign evaluation weight values of different magnitudes to different joint points, and compare the similarity of the first joint point sequence map with the standard joint point sequence map to obtain an evaluation result; wherein the evaluation weight values of the limb joint points are higher than those of the trunk joint points and the head joint points.
Although specific embodiments have been described herein, those of ordinary skill in the art will recognize that many other modifications or alternative embodiments are equally within the scope of this disclosure. For example, any of the functions and/or processing capabilities described in connection with a particular device or component may be performed by any other device or component. In addition, while various illustrative implementations and architectures have been described in accordance with embodiments of the present disclosure, those of ordinary skill in the art will recognize that many other modifications of the illustrative implementations and architectures described herein are also within the scope of the present disclosure.
Certain aspects of the present disclosure are described above with reference to block diagrams and flowchart illustrations of systems, methods, and/or computer program products according to example embodiments. It will be understood that one or more blocks of the block diagrams and flowchart illustrations, and combinations of blocks therein, can be implemented by executing computer-executable program instructions. Also, according to some embodiments, some blocks may not necessarily be performed in the order shown, or may not necessarily be performed in their entirety. In addition, additional components and/or operations beyond those shown in the block diagrams and flow diagrams may be present in certain embodiments.
Accordingly, blocks of the block diagrams and flowchart illustrations support combinations of means for performing the specified functions, combinations of elements or steps for performing the specified functions and program instruction means for performing the specified functions. It will also be understood that each block of the block diagrams and flowchart illustrations, and combinations of blocks in the block diagrams and flowchart illustrations, can be implemented by special purpose hardware-based computer systems that perform the specified functions, elements or steps, or combinations of special purpose hardware and computer instructions.
Program modules, applications, etc. described herein may include one or more software components, including, for example, software objects, methods, data structures, etc. Each such software component may include computer-executable instructions that, in response to execution, cause at least a portion of the functionality described herein (e.g., one or more operations of the illustrative methods described herein) to be performed.
The software components may be encoded in any of a variety of programming languages. An illustrative programming language may be a low-level programming language, such as assembly language associated with a particular hardware architecture and/or operating system platform. Software components that include assembly language instructions may need to be converted by an assembler program into executable machine code prior to execution by a hardware architecture and/or platform. Another exemplary programming language may be a higher level programming language, which may be portable across a variety of architectures. Software components that include higher level programming languages may need to be converted to an intermediate representation by an interpreter or compiler before execution. Other examples of programming languages include, but are not limited to, a macro language, a shell or command language, a job control language, a scripting language, a database query or search language, or a report writing language. In one or more exemplary embodiments, a software component containing instructions of one of the above programming language examples may be executed directly by an operating system or other software component without first being converted to another form.
The software components may be stored as files or other data storage constructs. Software components of similar types or related functionality may be stored together, such as in a particular directory, folder, or library. Software components may be static (e.g., preset or fixed) or dynamic (e.g., created or modified at execution time).
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the above embodiments, and various changes can be made within the knowledge of those skilled in the art without departing from the gist of the present invention.

Claims (8)

1. A Taijiquan evaluation method based on gesture recognition is characterized by comprising the following steps:
acquiring a Taijiquan video to be evaluated, extracting a posture joint skeleton map of the person to be evaluated by a deep learning method, and generating a first joint point sequence map;
based on a dynamic time warping algorithm, assigning evaluation weight values of different magnitudes to different joint points, and comparing the similarity of the first joint point sequence map with a standard joint point sequence map to obtain an evaluation result; wherein the evaluation weight values of the limb joint points are higher than those of the trunk joint points and the head joint points.
2. The Taijiquan evaluation method based on gesture recognition according to claim 1, wherein extracting the posture joint skeleton map of the person to be evaluated by the deep learning method comprises:
acquiring each frame of image of the Taijiquan video to be evaluated, calculating gradient information of the image, and obtaining pixel classification according to the gradient information, wherein the gradient information comprises: angle, intensity and consistency information;
and extracting a posture joint skeleton map of the person to be evaluated by a deep learning method according to the pixel classification to generate the first joint point sequence map.
3. The Taijiquan evaluation method based on gesture recognition according to claim 1, further comprising: acquiring a standard Taijiquan video, extracting a posture joint skeleton map by the deep learning method, and generating the standard joint point sequence map.
4. The Taijiquan evaluation method based on gesture recognition according to claim 1, wherein the deep learning network corresponding to the deep learning method comprises two branches: a first branch for predicting human joint point confidence maps; and a second branch for predicting human body part affinity fields; the first branch and the second branch are both iterative cascade structures.
5. The Taijiquan evaluation method based on gesture recognition according to claim 1, wherein the limb joint points comprise: the right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right knee, right ankle, left knee, and left ankle; the evaluation weight values of the limb joint points are each configured to be 9%.
6. The Taijiquan evaluation method based on gesture recognition according to claim 5, wherein the trunk joint points comprise the right hip and the left hip, and the evaluation weight values of the trunk joint points are each configured to be 2%; the head joint points comprise: the nose, neck, left eye, right eye, left ear, and right ear; the evaluation weight values of the head joint points are each configured to be 1%.
7. A Taijiquan evaluation system based on gesture recognition using the method of any one of claims 1 to 6, comprising:
a sequence map generation module, configured to receive a Taijiquan video to be evaluated, extract a posture joint skeleton map of the person to be evaluated through a deep learning method, and generate a first joint point sequence map;
a posture evaluation module, configured to, based on a dynamic time warping algorithm, assign evaluation weight values of different magnitudes to different joint points, and compare the similarity of the first joint point sequence map with a standard joint point sequence map to obtain an evaluation result; wherein the evaluation weight values of the limb joint points are higher than those of the trunk joint points and the head joint points.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method of any one of claims 1 to 6.
CN202111118432.9A 2021-09-23 2021-09-23 Taijiquan evaluation method and system based on gesture recognition Pending CN113947810A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111118432.9A CN113947810A (en) 2021-09-23 2021-09-23 Taijiquan evaluation method and system based on gesture recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111118432.9A CN113947810A (en) 2021-09-23 2021-09-23 Taijiquan evaluation method and system based on gesture recognition

Publications (1)

Publication Number Publication Date
CN113947810A (zh) 2022-01-18

Family

ID=79328751

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111118432.9A Pending CN113947810A (en) 2021-09-23 2021-09-23 Taijiquan evaluation method and system based on gesture recognition

Country Status (1)

Country Link
CN (1) CN113947810A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116434345A (en) * 2023-05-09 2023-07-14 北京维艾狄尔信息科技有限公司 Motion matching method, system, terminal and storage medium based on motion sense
CN117173789A (en) * 2023-09-13 2023-12-05 北京师范大学 Solid ball action scoring method, system, equipment and storage medium



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination