CN116844084A - Sports motion analysis and correction method and system integrating blockchain - Google Patents


Info

Publication number
CN116844084A
Authority
CN
China
Prior art keywords
motion
video
sports
standard
matrix
Prior art date
Legal status
Pending
Application number
CN202310739669.1A
Other languages
Chinese (zh)
Inventor
赵志丹
杜宇轩
吴逸南
Current Assignee
Shantou University
Original Assignee
Shantou University
Priority date
Filing date
Publication date
Application filed by Shantou University
Priority to CN202310739669.1A
Publication of CN116844084A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G06Q50/20 - Education
    • G06Q50/205 - Education administration or guidance
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74 - Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/761 - Proximity, similarity or dissimilarity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/70 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 - Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/48 - Matching video sequences
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 - Movements or behaviour, e.g. gesture recognition
    • G - PHYSICS
    • G09 - EDUCATION; CRYPTOGRAPHY; DISPLAY; ADVERTISING; SEALS
    • G09B - EDUCATIONAL OR DEMONSTRATION APPLIANCES; APPLIANCES FOR TEACHING, OR COMMUNICATING WITH, THE BLIND, DEAF OR MUTE; MODELS; PLANETARIA; GLOBES; MAPS; DIAGRAMS
    • G09B5/00 - Electrically-operated educational appliances
    • G09B5/06 - Electrically-operated educational appliances with both visual and audible presentation of the material to be studied

Abstract

The embodiment of the invention discloses a blockchain-integrated sports motion analysis and correction method, which comprises the following steps: recording user behavior records with a Fabric blockchain network in the system; using the access control mechanism of the Fabric blockchain network to authorize a cloud database to distribute the corresponding follow-along training courses for the user's follow-along practice; finally giving the corresponding scores; and using a joint-feature real-time system model PostEX for extracting skeleton diagrams from motion videos, an action-sequence similarity calculation algorithm, and an action comparison and scoring algorithm. The embodiment of the invention also discloses a blockchain-integrated sports motion analysis and correction system. The method has good practicability and robustness for action recognition and scoring, reduces the amount of computation, and can detect and correct the incorrect postures of sports enthusiasts when they learn related sports.

Description

Sports motion analysis and correction method and system integrating blockchain
Technical Field
The invention relates to image recognition and processing in the field of sports, and in particular to a sports motion analysis and correction method and system based on a Fabric blockchain network and joint-feature cosine similarity.
Background
With the improvement of economic and cultural levels, people's awareness of spiritual and cultural consumption has gradually increased; they pay more attention to improving their physical fitness and have begun to pursue higher-quality sports.
However, many sports enthusiasts have no access to dedicated one-to-one coaching; they can only obtain learning resources from the Internet to learn a series of sports actions, and incorrect actions during learning are difficult to discover and correct in time.
Human motion analysis has long been an important field of great interest to researchers, and evaluating the similarity of human actions is an important task within it. With the rapid development of machine learning and deep learning, researchers have proposed many algorithms that can be applied to the task of evaluating human action similarity.
In the prior art, image data of a target user on a treadmill is collected from different angles and fused to obtain a three-dimensional fitness action image of the target user; a target standard fitness action image matching the user's three-dimensional fitness action image is determined from a preset library of standard fitness action images; a score for the user's three-dimensional fitness action image is then obtained according to the key-point features of the target standard fitness action image; and if the score is lower than a target score, correction information for the user's three-dimensional fitness action image is output, so that the user's non-standard fitness actions can be corrected in time while the whole process remains imperceptible to the user. However, obtaining three-dimensional fitness action images is computationally very expensive, and the practicality and robustness of action recognition and scoring are not ideal. In addition, the prior art does not use blockchain technology to store the sports data, leaving shortcomings in data security and data management.
Disclosure of Invention
The technical problem to be solved by the embodiments of the invention is to provide a blockchain-integrated sports motion analysis and correction method and system that can, with low computational overhead, detect and correct the incorrect postures of sports enthusiasts when learning related sports, and that uses blockchain technology to store and track the data.
In order to solve the above technical problem, an embodiment of the invention provides a sports motion analysis and correction method based on a Fabric blockchain network and a joint-feature cosine similarity algorithm, comprising the following steps:
S1: building a standard sports action video sequence teaching material library in a cloud database;
S2: obtaining the user's purchase records of the corresponding sports action videos stored in the Fabric blockchain network; the Fabric blockchain network authorizes the cloud database, and the cloud database distributes the purchased sports teaching videos to the user; the whole follow-along training process of the user is filmed with a camera and stored as a to-be-corrected motion video; after the user finishes training, the to-be-corrected motion video is passed into the system to await processing;
S3: preprocessing the to-be-corrected motion video sequence so that each frame of the video matches one-to-one with the frames of the corresponding standard motion video sequence in the standard motion video material library;
S4: inputting the preprocessed video sequence into the joint-feature real-time system model PostEX for real-time calculation, and outputting the extracted limb key-point coordinates of each frame of the motion video sequence; when the user sets the system to the precise mode, the output is in the form of a three-dimensional heat map;
S5: converting the extracted limb key-point coordinate features into limb vector features using a feature extraction algorithm based on the cosine values of limb angles, and further converting them into limb cosine similarity feature values; using the extracted action characterization, searching the action material video library for the action with the closest vector cosine similarity, thereby identifying the specific action name performed by the user and matching the to-be-corrected motion video with the standard motion video;
S6: comparing the to-be-corrected motion skeleton diagram with the standard motion skeleton diagram using a scoring and correction algorithm, giving the corresponding score and action correction suggestions, returning the feedback result to the user through an interactive interface, and providing difference details and detailed action guidance text;
S7: encrypting the scoring record and storing it in the Fabric blockchain network in the form of a block; according to a pre-written smart contract, if the score reaches a certain standard, the blockchain network authorizes the relevant user to obtain the next installment of training videos of the corresponding exercise course and continue training and learning.
Wherein the on-chain commit (uplink) process of a block to be verified comprises the following steps:
S21: if the block to be verified at a certain node in the Fabric blockchain network is fully written, submitting the block to be verified to the Fabric blockchain network and sending it to all nodes;
S22: after receiving the block to be verified, each node in the Fabric blockchain network verifies it, the verification content including checking whether the block information conforms to the rules and standards of the network and verifying whether the initiator has sufficient authority and qualification to submit the block;
S23: the nodes that pass verification endorse the block to be verified; in the Fabric blockchain network, at least 50% of the nodes are required to endorse the block in order to confirm its validity;
S24: once the endorsement threshold is reached, the Fabric blockchain network marks the block as validated and formally stores it in the Fabric blockchain network, while each node updates its copy of the distributed ledger, which is permanently stored in the Fabric blockchain network (a simplified sketch of this endorsement check follows these steps).
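As an illustration of the endorsement rule in S21-S24, the following is a minimal Python sketch of the threshold check. It models blocks and the ledger as plain objects; it does not use the actual Hyperledger Fabric SDK, and all names (Block, commit_block, verify) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Block:
    records: tuple                      # user behavior records written into the block
    endorsements: set = field(default_factory=set)
    validated: bool = False

def commit_block(block, nodes, ledgers, verify):
    """S21-S24: broadcast the block, let every node verify it, collect endorsements,
    and store it on every ledger copy once at least 50% of the nodes endorse it."""
    for node in nodes:                            # S22: each node checks rules and authority
        if verify(node, block):
            block.endorsements.add(node)          # S23: passing nodes endorse the block
    if len(block.endorsements) >= 0.5 * len(nodes):
        block.validated = True                    # S24: endorsement threshold reached
        for node in nodes:
            ledgers[node].append(block)           # each node updates its ledger copy
    return block.validated

# Example usage (hypothetical node names and verification rule):
# nodes = ["peer0", "peer1", "peer2"]; ledgers = {n: [] for n in nodes}
# commit_block(Block(records=("purchase: tennis-course-01",)), nodes, ledgers, verify=lambda n, b: True)
```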
The camera comprises a monocular camera and an infrared sensor; the monocular camera is used for follow-along training and saving the video, and the infrared sensor is used for real-time follow-along practice of the broken-down movements.
The step of preprocessing the to-be-corrected motion video sequence comprises: using a sliding-window method to obtain a video sequence with the same number of frames as the corresponding sequence in the video library, converting the original RGB frames into a grayscale representation, removing the background with a spatial enhancement method, and reducing random noise with an averaging filter.
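A minimal sketch of this preprocessing step, written with OpenCV and NumPy, is shown below. The choice of cv2.createBackgroundSubtractorMOG2 as a stand-in for the spatial enhancement method, the 3x3 averaging kernel, and the equal-interval sampling are illustrative assumptions, since the patent does not specify them.

```python
import cv2
import numpy as np

def preprocess(video_path: str, n_frames: int) -> np.ndarray:
    """Sample n_frames at equal intervals, convert to grayscale,
    suppress the background and smooth random noise with a mean filter."""
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    keep = set(np.linspace(0, total - 1, n_frames, dtype=int))   # equal-interval (sliding-window style) sampling
    subtractor = cv2.createBackgroundSubtractorMOG2()            # stand-in for the spatial enhancement step
    frames = []
    for idx in range(total):
        ok, frame = cap.read()
        if not ok:
            break
        mask = subtractor.apply(frame)                           # background model updated on every frame
        if idx in keep:
            gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)       # RGB/BGR -> grayscale representation
            fg = cv2.bitwise_and(gray, gray, mask=mask)          # remove background
            frames.append(cv2.blur(fg, (3, 3)))                  # averaging filter against random noise
    cap.release()
    return np.stack(frames)
```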
The feature extraction method of the joint-feature real-time system model PostEX comprises the following steps:
The preprocessed video sequence first passes through the feature layers of the first 10 layers of a VGG19 network and is converted into image features F; F is fed into two branches that respectively predict the confidence and the affinity vectors of each point, where S is the confidence network and L is the affinity vector field network: S = ρ(F), L = φ(F). After the confidence of each joint point is predicted, the joint points are detected by non-maximum suppression; line integrals are computed along the affinity vectors between the detected joint points to obtain the affinity between joint points; the obtained human key points and the corresponding affinities are modeled from the perspective of graph theory; and finally the Hungarian algorithm is used to obtain the final human skeleton recognition result.
In the precise mode, normalization and standardization are further performed during preprocessing. The normalization formula is x' = (x - μ)/σ, where μ and σ are respectively the mean and variance of the three-dimensional data collected within 10 ms, x is the original three-dimensional coordinate vector, and x' is the normalized result for the standard action. The method also comprises a skeleton-scale standardization step and a data standardization step; the data standardization step comprises: establishing a human-body coordinate system, taking the average of the coordinates of two bone points to obtain the standard center point of the skeleton P_c(a, b, c), where a, b, c are the x, y, z coordinate values of the obtained center point, and subtracting the center point from each of the 25 bone points to obtain the center-standardized human bone coordinate points P_s; obtaining the body height D via the Euclidean distance and dividing the x, y, z of the 25 center-standardized bone point coordinates by the height D to obtain scale-standardized bone coordinates; and analyzing different parts of the human body by training a classifier.
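The precise-mode standardization described above can be sketched with NumPy as follows. The two reference joints used for the center point and the joint pair used to estimate the body height D are illustrative assumptions, since the text does not name them.

```python
import numpy as np

def normalize_window(x: np.ndarray, mu: np.ndarray, sigma: np.ndarray) -> np.ndarray:
    """x' = (x - mu) / sigma, with mu and sigma taken over the 10 ms acquisition window."""
    return (x - mu) / sigma

def standardize_skeleton(joints: np.ndarray, center_pair=(0, 1), height_pair=(3, 14)) -> np.ndarray:
    """Center the (25, 3) joint array on the mean of two reference bone points and
    divide by the body height D (Euclidean distance between two reference joints)."""
    p_c = joints[list(center_pair)].mean(axis=0)                           # standard center point P_c(a, b, c)
    centered = joints - p_c                                                # center-standardized points P_s
    d = np.linalg.norm(joints[height_pair[0]] - joints[height_pair[1]])    # body height D
    return centered / d                                                    # scale-standardized coordinates
```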
Wherein S5 further comprises an action-sequence similarity measurement method based on the key points extracted by the joint-feature real-time system, comprising the following steps:
S51: combining in pairs the 25 key points extracted by the joint-feature real-time system to form limb vector features;
S52: calculating the cosine of the angle between every two limb vectors to obtain 276 angle cosine values, which are combined into one vector, thereby converting each frame into a 276-dimensional cosine similarity vector;
S53: performing feature extraction on each frame of the two groups of pictures of the obtained follow-along action sequence A and the standard action sequence B to respectively obtain two groups of vectors A_n and B_n, where A_n is the vector representation of the nth picture of action sequence A and B_n is the vector representation of the nth picture of action sequence B; then splicing all the A_n to obtain matrix 1 and splicing all the B_n to obtain matrix 2;
S54: removing redundant matrix features using a matrix feature selection method.
Wherein, the step S54 further includes the steps of:
S541: using the G3D data set, performing matrix extraction once on all action sequences in the G3D data set, and flattening all the matrices into one-dimensional feature vectors;
S542: calculating the variances of all the features, and removing all features with variance lower than 0.05 (a variance-filtering sketch follows this list);
S543: selecting features using a recursive feature elimination method;
S544: the features selected in S541-S543 are saved and named the action recognition feature subset.
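The variance filtering of S542 can be sketched with scikit-learn as follows; the 0.05 threshold is taken from the text, while the data shapes are illustrative.

```python
import numpy as np
from sklearn.feature_selection import VarianceThreshold

def filter_low_variance(flattened_sequences: np.ndarray, threshold: float = 0.05):
    """Remove features whose variance over all G3D action sequences is below 0.05 (S542).

    flattened_sequences: array of shape (num_sequences, 276 * n), one row per
    flattened action-sequence matrix produced in S541."""
    selector = VarianceThreshold(threshold=threshold)
    reduced = selector.fit_transform(flattened_sequences)
    return reduced, selector.get_support(indices=True)   # filtered data and the surviving feature indices
```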
Wherein, the step S5 further comprises the steps of:
S55: first reducing matrix 1 and matrix 2 according to the action recognition feature subset so that the results become vector 1 and vector 2, and then calculating the cosine similarity of vector 1 and vector 2;
S56: the motion videos in the video library are computed in advance with the cosine similarity algorithm to obtain the corresponding action matrices.
Wherein, the step S6 specifically includes the steps of:
S61: since A_n = {a_1, a_2, a_3, ..., a_n} and B_n = {b_1, b_2, b_3, ..., b_n} are both n×276 matrices, the similarity between the A matrix and the B matrix is computed as:
sim(a_i, b_j) = (a_i · b_j) / (||a_i|| · ||b_j||)
where a_i is the cosine similarity vector of the ith frame in matrix A, b_j is the cosine similarity vector of the jth frame in matrix B, and sim(a_i, b_j) represents the similarity between the ith frame of matrix A and the jth frame of matrix B;
S62: according to the above sim(a_i, b_j), the specific score is calculated by the following scoring formula:
Score = α · sim(a_i, b_j) + β
where α and β are the mapping parameters that convert similarity to a percentage, Score is the calculated percentage score, and sim(a_i, b_j) represents the similarity between the ith frame of matrix A and the jth frame of matrix B (a sketch of this scoring follows).
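The following sketch computes sim(a_i, b_j) and the corresponding percentage score for one pair of frames. Treating the α, β mapping as a linear rescaling clipped to [0, 100] is an assumption; the patent only states that α and β map similarity to a percentage.

```python
import numpy as np

def frame_similarity(a_i: np.ndarray, b_j: np.ndarray) -> float:
    """Cosine similarity between the 276-dimensional vectors of frame i of A and frame j of B."""
    return float(np.dot(a_i, b_j) / (np.linalg.norm(a_i) * np.linalg.norm(b_j)))

def frame_score(a_i: np.ndarray, b_j: np.ndarray, alpha: float = 100.0, beta: float = 0.0) -> float:
    """Score = alpha * sim + beta, clipped to a 0-100 percentage (linear alpha, beta assumed)."""
    return float(np.clip(alpha * frame_similarity(a_i, b_j) + beta, 0.0, 100.0))
```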
The invention also provides a system using the above method, comprising:
a video loading module, used, after the user purchases the corresponding video, to preload the purchased standard action video sequence from the cloud database through Fabric resource authorization, so that the user can learn from and imitate it;
an action learning module, used for the user to practice along with the related sports videos and to generate and upload the to-be-corrected action video sequence;
a feature extraction module, used to preprocess the motion video sequence to be compared and feed it as input to the joint-feature real-time system model PostEX; through real-time calculation of the model, a two-dimensional skeleton limb-point diagram of each frame of the motion video sequence is output; when the system is set to the precise mode, the output is a three-dimensional heat map, at a higher cost in traffic and computing power;
an action recognition module, which identifies the specific action class using the feature extraction algorithm based on the cosine values of limb angles, retrieves the corresponding standard motion video from the video library so that the to-be-corrected motion video corresponds to the standard motion video, and converts it through the joint-feature real-time system model PostEX into a two-dimensional skeleton limb-point diagram of the standard video, which can also be output as a three-dimensional heat map in the precise mode;
a limb correction module, which uses the skeleton diagram sequences of the to-be-corrected motion video and the standard motion video to calculate, through the action comparison and scoring algorithm, the degree of difference between the athlete's sports action and the standard sports action, and gives the corresponding score and correction suggestions; the score is recorded in the Fabric blockchain network, and according to the pre-written smart contract, if the score reaches a certain standard, the blockchain network authorizes the relevant user to obtain the next installment of training videos of the corresponding exercise course and continue training and learning.
The embodiment of the invention has the following beneficial effects: aiming at the need of sports enthusiasts to learn sports actions accurately, the invention provides a multi-module, human-computer interactive sports motion analysis and correction system embedding a series of sports action recognition, comparison and correction algorithms. The scoring record is also recorded in the Fabric blockchain network; according to the pre-written smart contract, if the score reaches a certain standard, the blockchain network authorizes the relevant user to obtain the next installment of videos of the corresponding exercise course and continue follow-along learning.
Drawings
FIG. 1 is a schematic diagram of the overall process of the present invention;
FIG. 2 is a block diagram of the present invention;
FIG. 3 is a flow chart of the architecture of the video loading module of the present invention;
FIG. 4 is a flow chart of the structure of the feature extraction module of the present invention;
FIG. 5 is a flow chart of the structure of the motion recognition module of the present invention;
FIG. 6 is a flow chart of the structure of the limb correction module of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings.
As shown in FIG. 1, in the sports motion analysis and correction method based on a Fabric blockchain network and joint-feature cosine similarity according to an embodiment of the invention, the main steps include storing the behavior records of users (including course purchase records and action score records) with blockchain technology, storing all standard action videos in a cloud database, and authorizing the cloud database, through the access control mechanism of the Fabric blockchain network, to distribute the corresponding follow-along courses to users. A new joint-feature real-time system model, PostEX, is proposed for extracting skeleton diagrams from motion videos; an action-sequence similarity calculation algorithm is proposed for identifying specific actions and matching them with the corresponding actions in the video library; and finally the scoring and correction algorithm provides the corresponding score and correction feedback suggestions in the motion analysis and correction system. The scoring record is also recorded in the Fabric blockchain network; according to the pre-written smart contract, if the score reaches a certain standard, the blockchain network authorizes the relevant user to obtain the next installment of videos of the corresponding exercise course and continue follow-along learning.
The specific implementation steps are as follows.
S1: The standard action video sequences are preloaded in the cloud database in advance so that the user can learn in a targeted and selective way. The video loading module is divided into two parts: standard video loading and user video loading. The standard video loading folder contains several popular sports training items such as shooting, boxing, golf, tennis, bowling and driving. Each category has several clips for the user to select as standard videos for follow-along practice. The user video loading folder holds the follow-along videos saved by the user in the action learning module, which can be loaded into the system for review.
S2: The user's purchase records of the corresponding sports action videos stored in the Fabric blockchain network are obtained; the Fabric blockchain network authorizes the cloud database, and the cloud database distributes the purchased sports teaching videos to the user; the user's whole follow-along training process is filmed with a camera and saved as a to-be-corrected motion video; after the user finishes training, the to-be-corrected motion video is passed into the system to await processing.
After the user purchases the corresponding video, the purchased standard action video sequence can be preloaded from the cloud database through Fabric resource authorization, so that the user can learn from and imitate it.
The specific Fabric resource authorization flow is as follows:
First, a log channel is set up in the Fabric blockchain network, and a distributed ledger is set up in the log channel to record user behaviors and store log records. The distributed ledger is decentralized; its copies are stored at all user nodes and updated synchronously. User behavior records are appended to the Fabric blockchain network, and cryptographic techniques ensure that once a record has been added to the ledger it cannot be modified again.
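The append-only, tamper-evident property of the behavior log can be illustrated with a minimal hash chain. This is a simplified model of the ledger rather than the actual Fabric implementation, and the record format is hypothetical.

```python
import hashlib
import json

def append_record(ledger: list, record: dict) -> dict:
    """Append a user-behavior record whose hash covers the previous entry,
    so that modifying any earlier record breaks every later hash."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    payload = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    entry = {"record": record, "prev": prev_hash,
             "hash": hashlib.sha256(payload.encode()).hexdigest()}
    ledger.append(entry)
    return entry

# Example usage (hypothetical record fields):
# ledger = []; append_record(ledger, {"user": "u1", "action": "purchase", "course": "tennis-01"})
```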
The user needs to register his or her identity information at the application side the first time, and a key pair (sk, pk) is returned to the user; note that only the user knows the private key sk, while other nodes in the Fabric blockchain network can easily obtain the user's public key pk. When the user needs to log in again later, the key can be used directly to log into the application and carry out the related exercise learning operations.
After the user enters the application, the user can purchase videos of the related sports teaching courses; the purchase record is recorded in a local block, and once the block is fully written it is broadcast to the Fabric blockchain network for verification and the formal on-chain commit process. After the commit process is completed, the block storing the user behavior record is formally added to the Fabric blockchain network.
An administrator node in the Fabric blockchain network can query the relevant user behavior records, sign the specified teaching courses according to those records, and distribute them to the corresponding users. The administrator node signs the specified teaching course with the administrator private key and the user public key; the user side decrypts the signature with the user private key, thereby obtaining the teaching course index and downloading the relevant teaching course from the cloud database for learning.
The on-chain commit (uplink) process of the block to be verified mentioned in the Fabric resource authorization flow mainly comprises the following steps:
Step 1: if the block to be verified at a certain node in the Fabric blockchain network is fully written, the block is submitted to the Fabric blockchain network and sent to all nodes.
Step 2: upon receipt of the block to be verified, each node in the Fabric blockchain network validates it, including checking whether the block information meets the rules and standards of the network and verifying that the initiator has sufficient rights and qualifications to commit the block.
Step 3: the nodes that pass the validation endorse the block to be verified, i.e. they approve and confirm its validity and correctness. In the Fabric blockchain network, the validity of the block is confirmed once at least 50% of the nodes have endorsed it.
Step 4: once the endorsement threshold is reached, the Fabric blockchain network marks the block as "validated" and formally stores it in the Fabric blockchain network, while each node updates its copy of the distributed ledger, which is permanently stored in the Fabric blockchain network.
S3: The to-be-corrected motion video sequence is preprocessed so that each frame of the video matches one-to-one with the frames of the corresponding standard motion video sequence in the standard motion video material library.
When the user needs to compare and correct the follow-along sports actions, the follow-along video action sequence A is sampled in this module (the standard sports action video sequences in the video library are pre-sampled). The sampling flow is as follows:
To ensure that the number of frames extracted from the follow-along video action sequence is consistent with the number of frames of the video sequences in the video library, the sampling method adopted by the invention is sliding-window sampling. Specifically, x frames are taken at equal intervals from the follow-along video action sequence (the standard action video sequences in the video library are also sampled in advance, taking x frames at equal intervals with the same sliding-window method), thereby obtaining video sequences with the same number of frames.
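A sketch of taking x frames at equal intervals is given below; reading the decoded frames into a list beforehand is a simplification.

```python
import numpy as np

def sample_frames(frames: list, x: int) -> list:
    """Take x frames at equal intervals so that the follow-along sequence and the
    library sequence end up with the same number of frames."""
    indices = np.linspace(0, len(frames) - 1, num=x, dtype=int)
    return [frames[i] for i in indices]
```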
The module mainly uses a joint characteristic real-time system PostEX, namely a human body key point identification network structure. The network structure can extract coordinates of key points including receptors, feet, faces and the like, is suitable for single people and multiple people, can well complete posture estimation tasks such as human body actions, facial expressions, finger movements and the like, and has good robustness. The 25 body parts are identified as key points, and the calculation time is irrelevant to the detected number of people. The input is mainly a group of pictures, a video sequence or a video stream of a network camera. And outputting the original picture group or the video frame and the key point display.
The overall flow of feature extraction is as follows:
First, the input video is preprocessed. The preprocessing includes: 1) sampling the video at equal intervals with the sliding-window method to obtain a to-be-corrected video with the same number of frames as the video library; 2) eliminating the interference of the video background, since the system aims to capture human actions and is unrelated to the background; 3) removing the background with a spatial enhancement method so that only human actions are studied; 4) in addition, using an averaging filter to reduce the random noise that would otherwise produce sharp transitions when the RGB frames are converted into grayscale frames.
Then the preprocessed video sequence first passes through the feature preprocessing of the first 10 layers of a VGG19 network and is converted into image features F; F is fed into two branches that respectively predict the key-point confidence and the affinity vectors of each point, where S is the confidence network and L is the affinity vector field network: S = ρ(F), L = φ(F). The joint points are then detected by NMS (non-maximum suppression) from the predicted confidence of each joint point. Line integrals are computed along the affinity vectors between the detected joint points to obtain the affinity between joint points. The obtained human key points and the corresponding affinities are modeled from the perspective of graph theory. The human skeleton pose estimation problem is thus converted into a set of bipartite matching problems, and finally the Hungarian algorithm is used to obtain the final human skeleton recognition result.
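The two-branch structure (shared VGG19 features F, a confidence branch S and an affinity-field branch L) can be sketched in PyTorch as follows. The 25 key points and 24 limbs follow the text; the layer sizes and channel counts of the two heads are illustrative, and this is not the actual PostEX implementation.

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

class TwoBranchPose(nn.Module):
    """F = backbone(image); S = rho(F) joint confidence maps; L = phi(F) part affinity fields."""
    def __init__(self, n_joints: int = 25, n_limbs: int = 24):
        super().__init__()
        # first 10 convolutional layers of VGG19 (512-channel output feature map F)
        self.backbone = nn.Sequential(*list(vgg19().features.children())[:23])
        self.rho = nn.Sequential(nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(256, n_joints, 1))          # confidence branch S
        self.phi = nn.Sequential(nn.Conv2d(512, 256, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(256, 2 * n_limbs, 1))       # affinity vector field branch L

    def forward(self, image: torch.Tensor):
        f = self.backbone(image)
        # joints are then picked from S by NMS and assembled via line integrals over L
        return self.rho(f), self.phi(f)
```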
In the precise mode, a three-dimensional human skeleton diagram needs to be output. Because differences in shooting angle and subject height easily bias the coordinates of the extracted three-dimensional skeleton diagram, normalization and standardization are also performed during preprocessing. The normalization formula is x' = (x - μ)/σ, where μ and σ are respectively the mean and variance of the three-dimensional data collected within 10 ms, x is the original three-dimensional coordinate vector, and x' is the normalized result for the standard action. Because different people differ greatly in height and build, the skeleton joint coordinates of the same action sequence differ, so the skeleton scale needs to be standardized. Although individuals vary, the proportions of the sizes of the individual body parts to the body height vary little. Therefore, a data standardization method based on the ratio of each joint to the body height is proposed. The idea of three-dimensional bone data standardization is to establish a human-body coordinate system and take the average of the coordinates of two bone points to obtain the standard center point of the skeleton P_c(a, b, c), where a, b, c are the x, y, z coordinate values of the obtained center point; the difference between each of the 25 bone points and the center point gives the center-standardized human bone coordinate points P_s. The body height D is obtained via the Euclidean distance, and the x, y, z of the 25 center-standardized bone point coordinates are divided by the height D to obtain scale-standardized bone coordinates.
The real-time computing system uses a machine learning method to assign positions in depth images of the human body, where different parts of the body (e.g., arms, head, torso, hands) are analyzed by training a classifier. To train the classifier, human posture depth images of various shapes and sizes are generated by sampling from a large motion-capture database; a random decision forest classifier is used, and the decision trees are trained with many depth images whose human body parts have been labeled in advance. The decision trees are continuously optimized during training so that they classify the depth images of the specified body parts accurately, until the trained trees give, for each pixel, the probability of belonging to each body part; the next step then finds, for each body part, the region with the highest probability. Finally, the human joint positions predicted by the classifier are computed to form the final bone data.
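A scikit-learn sketch of the per-pixel body-part classification with a random decision forest is shown below; the depth features are simplified to raw depth patches, and the label set and patch construction are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def train_part_classifier(depth_patches: np.ndarray, part_labels: np.ndarray) -> RandomForestClassifier:
    """depth_patches: (num_pixels, patch_size) depth features around each sampled pixel;
    part_labels: (num_pixels,) body-part id for each pixel from the pre-labeled depth images."""
    forest = RandomForestClassifier(n_estimators=100)
    forest.fit(depth_patches, part_labels)
    return forest

def part_probabilities(forest: RandomForestClassifier, depth_patches: np.ndarray) -> np.ndarray:
    """Per-pixel probability of each body part; joint positions are then taken
    from the highest-probability region of each part."""
    return forest.predict_proba(depth_patches)
```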
S4: The preprocessed video sequence is input and calculated in real time with the joint-feature real-time system model PostEX, and the extracted limb key-point coordinates of each frame of the motion video sequence are output; when the user sets the system to the precise mode, the output is in the form of a three-dimensional heat map.
Some traditional human-similarity measurement algorithms have simple designs and consume little computing resource and time, but their performance is not good enough, while deep learning methods require too large a training data set, consume too many computing resources, and take too long to train. Therefore, the invention uses an action-sequence similarity measurement algorithm based on the key points extracted by the joint-feature real-time system.
The specific flow by which this module calculates the similarity between the follow-along action sequence A and the standard action sequence B is as follows:
Step 1: the 25 key points extracted by the joint-feature real-time system are combined in pairs to form limb vector features. The combination rule is: if joint point i (with coordinates (x_i, y_i)) and joint point j (with coordinates (x_j, y_j)) are connected by a limb, the limb vector feature is expressed as v_ij = (x_j - x_i, y_j - y_i); if there is no limb between joint point i and joint point j, the limb vector feature is expressed as v_ij = (0, 0).
Step 2: according to the human action recognition result, 24 limbs can be obtained from each human skeleton, so 24 valid non-zero limb vectors can be obtained in total from the formula in Step 1. The cosine of the angle between every pair of limb vectors is calculated once, giving 276 angle cosine values in total; these 276 angle cosine values are then combined into one vector, so that after the above single-picture feature extraction, one picture is converted into a 276-dimensional cosine similarity vector.
Step 3: the invention then performs feature extraction on each picture of the two groups of pictures of the obtained follow-along action sequence A and the standard action sequence B, obtaining two groups of vectors A_n and B_n respectively, where A_n is the vector representation of the nth picture of action sequence A and B_n is the vector representation of the nth picture of action sequence B. The invention then splices all the A_n together to obtain matrix 1 and splices all the B_n together to obtain matrix 2 (a sketch of Steps 1 to 3 follows).
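A NumPy sketch of Steps 1 to 3 is given below. The list of 24 connected joint pairs (the skeleton edges) is passed in as a parameter because the patent does not enumerate it.

```python
import numpy as np
from itertools import combinations

def frame_feature(keypoints: np.ndarray, limb_pairs: list) -> np.ndarray:
    """keypoints: (25, 2) joint coordinates of one frame; limb_pairs: the 24 (i, j)
    joint pairs connected by a limb. Returns the 276-dimensional cosine feature."""
    limbs = [keypoints[j] - keypoints[i] for i, j in limb_pairs]           # 24 non-zero limb vectors
    cosines = []
    for u, v in combinations(limbs, 2):                                     # C(24, 2) = 276 pairs
        cosines.append(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8))
    return np.asarray(cosines)

def sequence_matrix(frames: list, limb_pairs: list) -> np.ndarray:
    """Stack the per-frame 276-dim vectors into the n x 276 action matrix (matrix 1 / matrix 2)."""
    return np.stack([frame_feature(f, limb_pairs) for f in frames])
```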
Step 4: after obtaining the matrix, one problem is also considered that the dimension obtained by the matrix obtained after the above steps is 276n. This dimension is very large and it is certain that there are some matrix values that are superfluous during the completion of the action sequence. And the inability to find out and remove those superfluous features is a difficult problem with current skeleton-based human motion recognition. The invention designs a method for selecting the characteristics of the matrix, which comprises the following specific steps:
(1) The invention first takes a human action data set called the G3D data set.
(2) All action sequences in the G3D data set are extracted once into matrices, and all the matrices are then flattened into one-dimensional feature vectors.
(3) The variance of every feature is calculated, and features with variance below 0.05 are all removed, since this means that these features hardly change at all during the action sequences; their contribution to characterizing human action sequences is entirely redundant and of little reference value.
(4) After filtering out the low-variance features, features are selected using the recursive feature elimination (RFE) method. The specific implementation steps of the recursive feature elimination method are as follows:
(1) The original feature set D is fed, with the whole training set as input, into a machine learning algorithm, and the original features are then ranked according to certain attributes of the trained model (e.g., the feature_importances_ attribute).
(2) The features that perform relatively poorly in the machine learning algorithm are removed, and the features that perform relatively well are kept to form a new feature set D1.
(3) D1 is again fed into the machine learning algorithm as input, the same operations as in (1) and (2) are performed, and (1) and (2) are then repeated in a cycle until the number of selected features is reduced to the required number.
The invention uses random forest, LightGBM and XGBoost respectively for RFE feature selection. Each learner yields a feature subset, and the intersection of the three feature subsets is then taken as the final feature selection output.
(5) The features selected in (1) to (4) are saved and named the "action recognition feature subset" (a sketch of this selection procedure follows).
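Steps (4) and (5) can be sketched with scikit-learn's RFE and the three learners as follows; the number of features to keep and the label vector y (the G3D action classes) are assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

def select_action_features(X: np.ndarray, y: np.ndarray, n_keep: int = 64) -> np.ndarray:
    """Run RFE once per learner and keep the intersection of the three selected subsets."""
    learners = [RandomForestClassifier(n_estimators=100), LGBMClassifier(), XGBClassifier()]
    subsets = []
    for learner in learners:
        rfe = RFE(estimator=learner, n_features_to_select=n_keep)
        rfe.fit(X, y)
        subsets.append(set(np.where(rfe.support_)[0]))
    kept = sorted(set.intersection(*subsets))      # the "action recognition feature subset"
    return np.asarray(kept)
```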
Step 5: firstly, extracting the matrix 1 and the matrix 2 according to the action recognition feature subset, and changing the matrix 1 and the matrix 2 into a vector 1 and a vector 2 as a result. And then according to the cosine similarity of the vector 1 and the vector 2, the calculated result is the similarity of the two action sequences. The higher the similarity value, the more standard the heel-and-toe motion.
Step 6: the motion video in the video library is calculated in advance through a cosine similarity algorithm to obtain a corresponding action matrix. After the heel-and-exercise action matrix is calculated, the relevant standard motion matrix with the highest similarity is searched in the video library only by the method of the step 5, so that the heel-and-exercise action video is matched with the standard motion video, and the concrete name of the heel-and-exercise action is obtained.
S5: The extracted limb key-point coordinate features are converted into limb vector features using the feature extraction algorithm based on the cosine values of limb angles, and further converted into limb cosine similarity feature values; with the extracted action characterization, the action with the closest vector cosine similarity is searched for in the action material video library, thereby identifying the specific action name performed by the user and matching the to-be-corrected motion video with the standard motion video.
With the to-be-corrected action matrix A_n from the action recognition module and the standard action matrix B_n as input, the difference between the athlete's sports action and the standard sports action is calculated with the action comparison and scoring algorithm, and the corresponding score and correction suggestions are given. The specific algorithm flow is as follows.
Step 1: due to A n ={a 1 ,a 2 ,a 3 ,...,a n }(a i Cosine similarity vector for the i-th frame) and B n ={b 1 ,b 2 ,b 3 ,...,b n }(b i The cosine similarity vector of the i-th frame is a matrix (consisting of n cosine similarity vectors) of n×276, so the similarity algorithm of the a matrix and the B matrix is:
wherein ,cosine similarity vector representing the ith frame in matrix a,/v>Cosine similarity vector representing the j-th frame in matrix B,/v>The similarity between the ith frame of the matrix A and the jth frame of the matrix B is represented.
Step 2: according to the aboveThe specific score is calculated by the following scoring formula:
wherein ,、/>the corresponding Score is calculated as the percentage Score for the mapping parameter converted from similarity to percentage>The similarity between the ith frame of the matrix A and the jth frame of the matrix B is represented.
The user can view a histogram of the video-sequence similarity scores to observe the action scores of his or her limbs; then view the human-posture similarity of the learning video image frames, observing the similarity of each image frame, calculated as a percentage value; and finally view the similarity of each body part in the video, where selecting a limb part generates a curve of that limb's similarity over the whole time sequence, giving finer information about the moments that need training and correction, so that actions can be corrected in a targeted way.
Each scoring record is correspondingly encrypted into hash data and then stored in the Fabric blockchain network in the form of blocks; according to the pre-written smart contract, if the score reaches a certain standard, an administrator node in the blockchain network authorizes the relevant user to obtain the next installment of training videos of the corresponding exercise course and continue training and learning.
S6: The to-be-corrected motion skeleton diagram is compared with the standard motion skeleton diagram using the scoring and correction algorithm, the corresponding score and action correction suggestions are given, the feedback result is returned to the user through the interactive interface, and difference details and detailed action guidance text are provided.
To help the user realize the various functions through the functional interface, the system home page displays instructional text that guides the user's operation. In the functional interface, the functions designed and realized by the system are made available to the user through buttons, text and other guidance, realizing the effects of the various functions. Feedback prompting means that after the user performs the corresponding operation according to the system prompts, the system provides the feedback result of that operation for the user to check.
S7: The scoring record is encrypted and stored in the Fabric blockchain network in the form of a block; according to the pre-written smart contract, if the score reaches a certain standard, the blockchain network authorizes the relevant user to obtain the next installment of training videos of the corresponding exercise course and continue training and learning.
The embodiment of the invention also provides a system for sports motion analysis and correction based on the Fabric blockchain network and the joint-feature cosine similarity algorithm, using the above method and, as shown in FIG. 2, comprising:
1. Video loading module:
As shown in FIG. 3, the standard action video sequences are preloaded in the cloud database in advance so that the user can learn in a targeted and selective way. The video loading module is divided into two parts: standard video loading and user video loading. The standard video loading folder contains several popular sports training items such as shooting, boxing, golf, tennis, bowling and driving. Each category has several clips for the user to select as standard videos for follow-along practice. The user video loading folder holds the follow-along videos saved by the user in the action learning module, which can be loaded into the system for review. After the user purchases the corresponding video, the purchased standard action video sequence can be preloaded from the cloud database through Fabric resource authorization, so that the user can learn from and imitate it.
The specific Fabric resource authorization flow is as follows:
First, a log channel is set up in the Fabric blockchain network, and a distributed ledger is set up in the log channel to record user behaviors and store log records. The distributed ledger is decentralized; its copies are stored at all user nodes and updated synchronously. User behavior records are appended to the Fabric blockchain network, and cryptographic techniques ensure that once a record has been added to the ledger it cannot be modified again.
The user needs to register his or her identity information at the application side the first time, and a key pair (sk, pk) is returned to the user; note that only the user knows the private key sk, while other nodes in the Fabric blockchain network can easily obtain the user's public key pk. When the user needs to log in again later, the key can be used directly to log into the application and carry out the related exercise learning operations.
After the user enters the application, the user can purchase videos of the related sports teaching courses; the purchase record is recorded in a local block, and once the block is fully written it is broadcast to the Fabric blockchain network for verification and the formal on-chain commit process. After the commit process is completed, the block storing the user behavior record is formally added to the Fabric blockchain network.
An administrator node in the Fabric blockchain network can query the relevant user behavior records, sign the specified teaching courses according to those records, and distribute them to the corresponding users. The administrator node signs the specified teaching course with the administrator private key and the user public key; the user side decrypts the signature with the user private key, thereby obtaining the teaching course index and downloading the relevant teaching course from the cloud database for learning.
The on-chain commit (uplink) process of the block to be verified mentioned in the Fabric resource authorization flow mainly comprises the following steps:
Step 1: if the block to be verified at a certain node in the Fabric blockchain network is fully written, the block is submitted to the Fabric blockchain network and sent to all nodes.
Step 2: upon receipt of the block to be verified, each node in the Fabric blockchain network validates it, including checking whether the block information meets the rules and standards of the network and verifying that the initiator has sufficient rights and qualifications to commit the block.
Step 3: the nodes that pass the validation endorse the block to be verified, i.e. they approve and confirm its validity and correctness. In the Fabric blockchain network, the validity of the block is confirmed once at least 50% of the nodes have endorsed it.
Step 4: once the endorsement threshold is reached, the Fabric blockchain network marks the block as "validated" and formally stores it in the Fabric blockchain network, while each node updates its copy of the distributed ledger, which is permanently stored in the Fabric blockchain network.
2. Action learning module:
This module is used for the user to watch the standard video imported by the video loading module and follow its actions. When the user starts learning, he or she can choose to learn the key actions of the video, and after all the actions have been learned, practice the complete video. The infrared sensor is selected for real-time follow-along practice of the broken-down movements, and the actions are corrected in time according to the feedback results. After the learning of the broken-down movements is completed, the monocular camera is selected for follow-along practice and the video is saved, so that the pose-sensing module can conveniently extract the skeleton key points. After the follow-along practice is completed, a follow-along action video sequence is generated and uploaded.
3. Feature extraction module:
As shown in FIG. 4, when the user needs to compare and correct his or her follow-along actions, the follow-along action sequence A is sampled in this module (the standard action video sequences in the video library are pre-sampled). The sampling flow is as follows:
To ensure that the number of frames extracted from the follow-along video action sequence is consistent with the number of frames of the video sequences in the video library, the sampling method adopted by the invention is sliding-window sampling. Specifically, x frames are taken at equal intervals from the follow-along video action sequence (the standard action video sequences in the video library are also sampled in advance, taking x frames at equal intervals with the same sliding-window method), thereby obtaining video sequences with the same number of frames.
This module mainly uses the joint-feature real-time system PostEX, i.e., a human key-point recognition network structure. The network structure can extract the coordinates of key points including the body, feet, face and so on; it is suitable for both single-person and multi-person scenes, handles posture estimation tasks such as human body actions, facial expressions and finger movements well, and has good robustness. It recognizes 25 body parts as key points, and the computation time is independent of the number of people detected. The input is mainly a group of pictures, a video sequence, or the video stream of a webcam; the output is the original picture group or video frames together with the key-point display.
The overall flow of feature extraction is as follows:
First, the input video is preprocessed. The preprocessing includes: 1) sampling the video at equal intervals with the sliding-window method to obtain a to-be-corrected video with the same number of frames as the video library; 2) eliminating the interference of the video background, since the system aims to capture human actions and is unrelated to the background; 3) removing the background with a spatial enhancement method so that only human actions are studied; 4) in addition, using an averaging filter to reduce the random noise that would otherwise produce sharp transitions when the RGB frames are converted into grayscale frames.
Then the preprocessed video sequence first passes through the feature preprocessing of the first 10 layers of a VGG19 network and is converted into image features F; F is fed into two branches that respectively predict the key-point confidence and the affinity vectors of each point, where S is the confidence network and L is the affinity vector field network: S = ρ(F), L = φ(F). The joint points are then detected by NMS (non-maximum suppression) from the predicted confidence of each joint point. Line integrals are computed along the affinity vectors between the detected joint points to obtain the affinity between joint points. The obtained human key points and the corresponding affinities are modeled from the perspective of graph theory. The human skeleton pose estimation problem is thus converted into a set of bipartite matching problems, and finally the Hungarian algorithm is used to obtain the final human skeleton recognition result.
In the precise mode, a three-dimensional human skeleton diagram needs to be output. Because differences in shooting angle and subject height easily bias the coordinates of the extracted three-dimensional skeleton diagram, normalization and standardization are also performed during preprocessing. The normalization formula is x' = (x - μ)/σ, where μ and σ are respectively the mean and variance of the three-dimensional data collected within 10 ms, x is the original three-dimensional coordinate vector, and x' is the normalized result for the standard action. Because different people differ greatly in height and build, the skeleton joint coordinates of the same action sequence differ, so the skeleton scale needs to be standardized. Although individuals vary, the proportions of the sizes of the individual body parts to the body height vary little. Therefore, a data standardization method based on the ratio of each joint to the body height is proposed. The idea of three-dimensional bone data standardization is to establish a human-body coordinate system and take the average of the coordinates of two bone points to obtain the standard center point of the skeleton P_c(a, b, c), where a, b, c are the x, y, z coordinate values of the obtained center point; the difference between each of the 25 bone points and the center point gives the center-standardized human bone coordinate points P_s. The body height D is obtained via the Euclidean distance, and the x, y, z of the 25 center-standardized bone point coordinates are divided by the height D to obtain scale-standardized bone coordinates.
The real-time computing system uses a machine learning method to assign positions in depth images of the human body, where different parts of the body (e.g., arms, head, torso, hands) are analyzed by training a classifier. To train the classifier, human posture depth images of various shapes and sizes are generated by sampling from a large motion-capture database; a random decision forest classifier is used, and the decision trees are trained with many depth images whose human body parts have been labeled in advance. The decision trees are continuously optimized during training so that they classify the depth images of the specified body parts accurately, until the trained trees give, for each pixel, the probability of belonging to each body part; the next step then finds, for each body part, the region with the highest probability. Finally, the human joint positions predicted by the classifier are computed to form the final bone data.
4. The action recognition module:
As shown in fig. 5, some conventional human-similarity measurement algorithms are simple in design and consume little computing resource and time, but their accuracy is limited, while deep-learning methods require very large training data sets, consume too many computing resources, and take too long to train. The invention therefore designs an action-sequence similarity measurement algorithm based on the key points extracted by the joint-feature real-time system.
The specific flow for calculating the similarity between a follow-along action sequence A and a standard action sequence B in this module is as follows:
step 1: and combining the 25 key points extracted through the joint characteristic real-time system in pairs to form limb vector characteristics. The rule of combination is: if the joint point i (coordinate is [ ],/>) With the joint point j (coordinates (+)>,/>) If there is a limb between them, the limb vector feature is expressed as +.>-/>-/>) If there is no limb between the joint point i and the joint point j, the limb vector feature is expressed as +.>
Step 2: according to the human-skeleton recognition result, 24 limbs can be obtained from each human skeleton, so 24 effective non-zero limb vectors are obtained in total from the formula of Step 1. The cosine of the included angle is computed for every pair of limb vectors, giving 276 included-angle cosine values in total, which are then assembled into a single vector; after this feature extraction, each picture is therefore converted into a 276-dimensional cosine-similarity vector.
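A minimal sketch of Steps 1-2 for a single frame; the 24 limb pairs listed in LIMBS are placeholders for the skeleton topology actually used.

import numpy as np
from itertools import combinations

LIMBS = [(i, i + 1) for i in range(24)]   # 24 (joint i, joint j) pairs, assumed

def frame_features(keypoints):
    # keypoints: (25, 2) array of joint coordinates for one frame
    limb_vecs = [keypoints[j] - keypoints[i] for i, j in LIMBS]   # 24 limb vectors
    feats = []
    for u, v in combinations(limb_vecs, 2):                       # C(24, 2) = 276 pairs
        cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v) + 1e-8)
        feats.append(cos)
    return np.asarray(feats)                                      # shape (276,)

print(frame_features(np.random.rand(25, 2)).shape)                # (276,)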
Step 3: feature extraction is then performed on every picture of the two groups, the follow-along action sequence A and the standard action sequence B, giving two groups of vectors A_n and B_n respectively, where A_n is the vector representation of the n-th picture of action sequence A and B_n is the vector representation of the n-th picture of action sequence B. All A_n are then concatenated to obtain matrix 1, and all B_n are concatenated to obtain matrix 2.
Step 4: once the matrices are obtained, a further problem has to be considered: the matrices produced by the above steps have dimension 276n. This dimension is very large, and some of the matrix values are certain to be superfluous over the course of the action sequence; being unable to find and remove those superfluous features is a long-standing difficulty of skeleton-based human action recognition. The invention therefore designs a feature-selection method for the matrices, with the following specific steps:
(1) The invention first adopts a human action data set called the G3D data set.
(2) All action sequences in the G3D data set undergo the matrix extraction described above once, and all the resulting matrices are then flattened into one-dimensional feature vectors.
(3) The variance of every feature is calculated, and features with variance below 0.05 are all removed, since such features barely change across the action sequences; their contribution to characterizing a human action sequence is superfluous and of little reference value.
(4) Features are selected using a recursive feature elimination method after filtering out features with low variance. The specific implementation steps of the recursive feature elimination method are as follows:
(1) The original feature set D, over the whole training set, is fed as input into a machine learning algorithm, and the original features are then ranked according to certain attributes of the fitted model (such as the feature_importances_ attribute).
(2) The features that perform relatively poorly in the machine learning algorithm are removed, and the better-performing features are retained to form a new feature set D1.
(3) D1 is fed into the machine learning algorithm again as input, the same operations as in (1) and (2) are performed, and the cycle is repeated until the number of selected features has been reduced to the required number.
Here the invention uses random forest, LightGBM and XGBoost respectively for RFE feature selection. Each learner yields a feature subset, and the intersection of the three feature subsets is then taken as the final feature-selection output (a sketch of this pipeline is given after the list below).
(5) The features selected in (1)-(4) are saved and named the "action recognition feature subset".
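A sketch of this feature-selection pipeline using scikit-learn, LightGBM and XGBoost; the data loading, the number of features to keep and the integer label encoding are assumptions made for the sketch.

import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import RFE, VarianceThreshold
from lightgbm import LGBMClassifier
from xgboost import XGBClassifier

def select_action_features(X, y, n_keep=200):
    # X: (n_sequences, 276*n) flattened action matrices, y: integer action labels
    # (3) drop features whose variance is below 0.05
    var = VarianceThreshold(threshold=0.05)
    X_var = var.fit_transform(X)
    kept = np.flatnonzero(var.get_support())

    # (4) recursive feature elimination with three learners
    subsets = []
    for est in (RandomForestClassifier(n_estimators=100, random_state=0),
                LGBMClassifier(random_state=0),
                XGBClassifier(random_state=0)):
        rfe = RFE(est, n_features_to_select=n_keep, step=0.1).fit(X_var, y)
        subsets.append(set(kept[rfe.get_support()]))

    # intersection of the three subsets -> "action recognition feature subset"
    return sorted(set.intersection(*subsets))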
Step 5: matrix 1 and matrix 2 are first reduced according to the action recognition feature subset, turning them into vector 1 and vector 2; the cosine similarity of vector 1 and vector 2 is then computed, and the result is the similarity of the two action sequences. The higher the similarity value, the more standard the follow-along motion.
Step 6: the motion videos in the video library are processed in advance with the cosine-similarity algorithm to obtain their corresponding action matrices. Once the follow-along action matrix has been computed, the related standard motion matrix with the highest similarity is retrieved from the video library using only the method of Step 5, so that the follow-along action video is matched with a standard motion video and the concrete name of the follow-along action is obtained.
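A minimal sketch of Steps 5-6, where feature_subset is assumed to be the saved action recognition feature subset expressed as integer indices into the flattened matrix, and library is an assumed dictionary of pre-computed standard action matrices keyed by action name.

import numpy as np

def sequence_similarity(matrix1, matrix2, feature_subset):
    # Step 5: reduce each flattened action matrix to the selected features
    # (vector 1 and vector 2) and compare them with cosine similarity
    v1 = matrix1.flatten()[feature_subset]
    v2 = matrix2.flatten()[feature_subset]
    return float(np.dot(v1, v2) /
                 (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-8))

def recognize_action(follow_matrix, library, feature_subset):
    # Step 6: return the name of the standard action most similar to the
    # follow-along action matrix
    scores = {name: sequence_similarity(follow_matrix, std, feature_subset)
              for name, std in library.items()}
    return max(scores, key=scores.get)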
5. The limb correction module:
As shown in fig. 6, the to-be-corrected motion matrix A_n and the standard motion matrix B_n from the action recognition module are taken as input; the difference between the athlete's sport motion and the standard sport motion is calculated by a motion-comparison scoring algorithm, and a corresponding score and correction suggestions are given. The specific algorithm flow is as follows.
Step 1: since A_n = {a_1, a_2, a_3, ..., a_n} (a_i being the cosine-similarity vector of the i-th frame) and B_n = {b_1, b_2, b_3, ..., b_n} (b_i being the cosine-similarity vector of the i-th frame) are both n×276 matrices (each consisting of n cosine-similarity vectors), the similarity algorithm for the A matrix and the B matrix computes Sim(i, j) = (a_i · b_j) / (‖a_i‖ ‖b_j‖), where a_i is the cosine-similarity vector of the i-th frame in matrix A, b_j is the cosine-similarity vector of the j-th frame in matrix B, and Sim(i, j) is the similarity between the i-th frame of matrix A and the j-th frame of matrix B.
Step 2: a specific score is then calculated from Sim(i, j) with a scoring formula in which two mapping parameters convert the similarity into a percentage and Score is the resulting percentage score; here Sim(i, j) is again the similarity between the i-th frame of matrix A and the j-th frame of matrix B.
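A sketch of the frame-level comparison and scoring; the cosine form of Sim(i, j) follows the description above, while the linear mapping with parameters alpha and beta is only an assumed form of the scoring formula, not taken verbatim from the patent.

import numpy as np

def frame_similarity_matrix(A, B):
    # A, B: (n, 276) matrices of per-frame cosine-similarity vectors
    A_n = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-8)
    B_n = B / (np.linalg.norm(B, axis=1, keepdims=True) + 1e-8)
    return A_n @ B_n.T                      # Sim[i, j] in [-1, 1]

def frame_score(sim_ij, alpha=50.0, beta=50.0):
    # assumed linear mapping from similarity to a 0-100 percentage score
    return alpha * sim_ij + beta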
The user may view the video-sequence similarity score histogram to observe the motion scores of his or her limbs, then examine the human-pose similarity of the learning-video image frames, observing the similarity of each image frame computed as a percentage value. Finally, the user may examine the similarity of each body part in the video: by selecting a limb part, a curve of that limb's similarity over the whole time sequence is generated, providing finer-grained information on the moments that need training and correction, so that the action can be corrected in a targeted way.
Each scoring record is encrypted into hash data and stored in the Fabric blockchain network in the form of blocks. According to the pre-written smart contract, if the score reaches a certain standard, an administrator node in the blockchain network authorizes the related user to obtain the next period of exercise-course training videos and to continue training and learning.
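The hashing of a scoring record can be sketched as follows; submit_to_fabric and the 80-point threshold are hypothetical placeholders, since the actual chaincode and smart contract are not shown here.

import hashlib
import json
import time

def store_score(user_id, video_id, score, submit_to_fabric):
    record = {"user": user_id, "video": video_id,
              "score": score, "ts": int(time.time())}
    # hash the scoring record before it is written to the blockchain
    digest = hashlib.sha256(json.dumps(record, sort_keys=True).encode()).hexdigest()
    submit_to_fabric(digest, record)        # placeholder for the chaincode call
    return score >= 80                      # placeholder authorization threshold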
The module also includes a function interface and feedback prompts. To help the user access the various functions through the function interface, the system home page displays instructional text with operation guidance. In the function interface, buttons, text and other elements guide the user in using the functions designed and implemented by the system. The feedback prompt means that, after the user performs an operation according to the system prompts, the system returns a feedback result corresponding to that operation for the user to check.
The above disclosure is only a preferred embodiment of the present invention, and the scope of the invention is of course not limited thereto; equivalent changes made according to the claims of the present invention therefore still fall within the scope of the present invention.

Claims (10)

1. A physical exercise action analysis and correction method based on a Fabric blockchain network and a joint feature cosine similarity algorithm, characterized by comprising the following steps:
S1: constructing a standard sports video sequence teaching material library in a cloud database;
S2: acquiring the user's purchase records of the corresponding sports motion videos stored in the Fabric blockchain network, the Fabric blockchain network authorizing the cloud database and the cloud database distributing the purchased sports teaching videos to the user; shooting the user's whole training process with a camera and storing the shot video as the to-be-corrected motion video, wherein after the user finishes training, the to-be-corrected motion video is transmitted into the system to await processing;
s3: preprocessing a motion video sequence to be corrected to enable each frame of the video to be matched with each frame in a corresponding standard motion video sequence in a standard motion video material library one by one;
S4: inputting the preprocessed video sequence, performing real-time calculation with the joint-feature real-time system model PostEX, outputting and extracting the limb key-point coordinates of each frame of the motion video sequence, and outputting them in three-dimensional heat-map form when the user sets the system to the accurate mode;
S5: converting the extracted limb key-point coordinate features into limb vector features by means of a feature extraction algorithm based on the cosine values of body limb angles, and further into limb cosine-similarity feature values; using the extracted motion characterization to search the motion material video library for the motion with the closest vector cosine similarity, thereby recognizing the specific motion name performed by the user and making the to-be-corrected motion video correspond to the standard motion video;
S6: comparing the to-be-corrected motion skeleton diagram with the standard motion skeleton diagram by means of a scoring and correction algorithm, giving a corresponding score and motion correction suggestions, returning the comparison feedback to the user through an interactive interface, and providing difference details and detailed motion guidance text;
S7: encrypting the scoring record and storing it in the Fabric blockchain network in the form of blocks, and, according to the pre-written smart contract, if the score reaches a certain standard, the blockchain network authorizing the related user to obtain the next period of exercise-course training videos and continue training and learning.
2. The method for analyzing and correcting sports motion based on the Fabric blockchain network and the joint feature cosine similarity algorithm according to claim 1, wherein the step of uploading to the Fabric blockchain network comprises:
S21: if the block to be verified at a node in the Fabric blockchain network is fully written, submitting the block for verification to the Fabric blockchain network and sending it to all nodes;
s22: after receiving the verification block, each node in the Fabric blockchain network verifies the block, wherein the verification content comprises checking whether the block information accords with rules and standards in the network and verifying whether an initiator has enough authority and qualification to submit the block;
S23: the nodes that pass verification endorsing the block to be verified, wherein in the Fabric blockchain network at least 50% of the nodes are required to endorse the block in order to confirm its validity;
S24: once the endorsement threshold is reached, the Fabric blockchain network marking the block as validated and formally storing it in the Fabric blockchain network, while each node updates its copy of the distributed ledger, so that the block is permanently stored in the Fabric blockchain network.
3. The method for analyzing and correcting sports motion based on the Fabric blockchain network and the joint feature cosine similarity algorithm according to claim 1, wherein the camera comprises a monocular camera and an infrared sensor, the monocular camera being used to record and store the follow-along videos, and the infrared sensor being used for real-time detection of the follow-along motions.
4. The step of preprocessing the to-be-corrected motion video sequence comprises the following steps: obtaining a video sequence with the same number of frames as in the video library by a sliding-window method, converting the original RGB frames into a gray-scale representation, removing the background with a spatial enhancement method, and reducing random noise with an averaging filter; and the feature extraction method of the joint-feature real-time system model PostEX comprises the following steps:
passing the preprocessed video sequence through the first 10 layers of a VGG19 network for feature extraction and converting the video sequence into image features F; dividing the image features F into two branches that respectively predict the confidence and the affinity vectors of each point, where S is the confidence network and L is the affinity vector field network; detecting the joint points by non-maximum suppression applied to the predicted confidence of each joint point; then performing line integration on the affinity vectors between the detected joint points to obtain the affinities between joint points; modeling the obtained human key points and the corresponding affinities from the angle of graph theory; and finally obtaining the final human-skeleton recognition result with the Hungarian algorithm.
5. The method for analyzing and correcting sports motion based on the Fabric blockchain network and the joint feature cosine similarity algorithm according to claim 4, wherein in the accurate mode, normalization and standardization are further performed during preprocessing, the normalization formula being x_norm = (x − μ)/σ, where μ and σ are respectively the mean and the variance of the three-dimensional data acquired within 10 ms, x is the original three-dimensional coordinate vector, and x_norm is the normalized result for the standard action; the method further comprising a skeleton-scale standardization step and a data standardization step, the data standardization step comprising: establishing a human body coordinate system and obtaining the standard center point of the skeleton, P_c(a, b, c), as the coordinate average of two bone points, where a, b and c are the obtained x, y, z coordinate values of the center point; taking the differences between the 25 bone points and the center point to obtain the center-standardized human bone coordinate points P_s; obtaining the body height D through the Euclidean distance and dividing the x, y and z coordinates of the 25 center-standardized bone points by the height D to obtain scale-standardized bone coordinates; and analyzing different parts of the body by training a classifier.
6. The method for analyzing and correcting sports motion based on the Fabric blockchain network and the joint feature cosine similarity algorithm according to claim 5, wherein S5 further comprises an action-sequence similarity measurement method based on the key points extracted by the joint-feature real-time system, comprising the following steps:
s51: combining the 25 key points extracted through the joint characteristic real-time system in pairs to form limb vector characteristics;
S52: calculating the cosine value of the included angle between every two limb vectors to obtain 276 included-angle cosine values and forming them into a vector, so that each picture is converted into a 276-dimensional cosine-similarity vector;
S53: performing feature extraction on each picture of the two groups, the follow-along action sequence A and the standard action sequence B, to obtain two groups of vectors A_n and B_n respectively, where A_n is the vector representation of the n-th picture of action sequence A and B_n is the vector representation of the n-th picture of action sequence B, and then concatenating all A_n to obtain matrix 1 and all B_n to obtain matrix 2;
S54: removing the redundant features by means of a feature-selection method for the matrices.
7. The method for analyzing and correcting sports motion based on the Fabric blockchain network and the joint feature cosine similarity algorithm as set forth in claim 6, wherein the step S54 further includes the steps of:
S541: using the G3D data set, performing the matrix extraction once on all action sequences in the G3D data set and flattening all the matrices into one-dimensional feature vectors;
s542: calculating variances of all the features, and completely removing features with variances lower than 0.05;
s543: selecting features using a recursive feature elimination method;
s544: the features selected in S541-S543 are saved and named as the motion recognition feature subset.
8. The method for analyzing and correcting sports motion based on the Fabric blockchain network and the joint feature cosine similarity algorithm according to claim 7, wherein the step S5 further comprises the steps of:
S55: first reducing matrix 1 and matrix 2 according to the motion recognition feature subset so that they become vector 1 and vector 2, and then calculating the cosine similarity of vector 1 and vector 2;
s56: the motion video in the video library is calculated in advance through a cosine similarity algorithm to obtain a corresponding action matrix.
9. The method for analyzing and correcting the sports motion based on the Fabric blockchain network and the joint feature cosine similarity algorithm according to claim 8, wherein the step S6 specifically comprises the steps of:
S61: since A_n = {a_1, a_2, a_3, ..., a_n} and B_n = {b_1, b_2, b_3, ..., b_n} are both n×276 matrices, the similarity algorithm of the A matrix and the B matrix computes Sim(i, j), the cosine similarity between a_i and b_j, where a_i is the cosine-similarity vector of the i-th frame in matrix A, b_j is the cosine-similarity vector of the j-th frame in matrix B, and Sim(i, j) is the similarity between the i-th frame of matrix A and the j-th frame of matrix B;
S62: calculating the specific score from Sim(i, j) with a scoring formula in which two mapping parameters convert the similarity into a percentage and Score is the resulting percentage score, where Sim(i, j) is the similarity between the i-th frame of matrix A and the j-th frame of matrix B.
10. A system for running the sports motion analysis and correction method based on the Fabric blockchain network and the joint feature cosine similarity algorithm of any of claims 1-9, comprising:
The video loading module is used for preloading the purchased standard action video sequence in the cloud database through the related resource authorization of Fabric after the user purchases the corresponding video, so that the user can learn and imitate the standard action video sequence;
the action learning module is used for enabling a user to practice related sports videos, generating an action video sequence to be corrected and uploading the action video sequence;
the feature extraction module is used for preprocessing the to-be-compared motion video sequence, taking it as input to the joint-feature real-time system model PostEX, and, through real-time computation of the model, outputting and extracting a two-dimensional skeleton limb-point map of each frame of the motion video sequence; when the system is set to the accurate mode, the output takes the form of a three-dimensional heat map, which consumes more bandwidth and computing power;
the motion recognition module recognizes the specific motion class by means of the feature extraction algorithm based on cosine values of body limb angles, extracts the corresponding standard motion video from the video library so that the to-be-corrected motion video corresponds to the standard motion video, converts it through the joint-feature real-time system model PostEX into a two-dimensional skeleton limb-point map of the standard video, and can also output a three-dimensional heat map in the accurate mode;
The limb correction module calculates, through the motion-comparison scoring algorithm and using the skeleton-diagram sequences of the to-be-corrected sports video and the standard sports video, the degree of difference between the athlete's sports motion and the standard sports motion, and gives a corresponding score and correction suggestions; the score is recorded correspondingly in the Fabric blockchain network, and, according to the pre-written smart contract, if the score reaches a certain standard, the blockchain network authorizes the related user to obtain the next period of sports-course training videos and continue training and learning.
CN202310739669.1A 2023-06-21 2023-06-21 Sports motion analysis and correction method and system integrating blockchain Pending CN116844084A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310739669.1A CN116844084A (en) 2023-06-21 2023-06-21 Sports motion analysis and correction method and system integrating blockchain

Publications (1)

Publication Number Publication Date
CN116844084A true CN116844084A (en) 2023-10-03

Family

ID=88171769

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310739669.1A Pending CN116844084A (en) 2023-06-21 2023-06-21 Sports motion analysis and correction method and system integrating blockchain

Country Status (1)

Country Link
CN (1) CN116844084A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117746513A (en) * 2024-02-19 2024-03-22 成都体育学院 Motion technology teaching method and system based on video moving object detection and fusion
CN117746513B (en) * 2024-02-19 2024-04-30 成都体育学院 Motion technology teaching method and system based on video moving object detection and fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination