CN113239849B - Body-building action quality assessment method, body-building action quality assessment system, terminal equipment and storage medium


Info

Publication number
CN113239849B
CN113239849B (application CN202110582037.XA)
Authority
CN
China
Prior art keywords
action
standard
force
skeleton position
user
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110582037.XA
Other languages
Chinese (zh)
Other versions
CN113239849A
Inventor
林承瑜
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Digital Gravity Xiamen Sports Technology Co ltd
Original Assignee
Digital Gravity Xiamen Sports Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Digital Gravity Xiamen Sports Technology Co ltd
Priority to CN202110582037.XA
Publication of CN113239849A
Application granted
Publication of CN113239849B


Classifications

    • G06V 40/20 — Recognition of biometric, human-related or animal-related patterns in image or video data; Movements or behaviour, e.g. gesture recognition
    • G06F 18/22 — Pattern recognition; Analysing; Matching criteria, e.g. proximity measures
    • G06F 18/23 — Pattern recognition; Analysing; Clustering techniques
    • G06N 3/045 — Computing arrangements based on biological models; Neural networks; Combinations of networks
    • G06N 3/08 — Computing arrangements based on biological models; Neural networks; Learning methods
    • G06V 10/462 — Extraction of image or video features; Salient features, e.g. scale invariant feature transforms [SIFT]
    • Y02P 90/30 — Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation; Computing systems specially adapted for manufacturing


Abstract

The invention provides a body-building action quality assessment method, system, terminal device and storage medium. The method comprises the following steps: collecting the user's body-building action to obtain an action acquisition signal, performing feature extraction on the action acquisition signal to obtain human skeleton position features and human action force features, and determining the standard skeleton position features and standard action force features corresponding to the body-building action; comparing the human skeleton position features with the standard skeleton position features, and the human action force features with the standard action force features, to obtain a skeleton position feature similarity and an action force feature similarity; and generating a quality evaluation result of the body-building action according to the skeleton position feature similarity and the action force feature similarity. According to the invention, the quality evaluation result is generated from both the skeleton position feature similarity and the action force feature similarity, and the user's body-building action can be effectively guided and corrected based on this result.

Description

Body-building action quality assessment method, body-building action quality assessment system, terminal equipment and storage medium
Technical Field
The invention belongs to the field of intelligent body building, and particularly relates to a body building action quality assessment method, a body building action quality assessment system, terminal equipment and a storage medium.
Background
In recent years, with the rapid development of artificial intelligence, the deep neural network can calculate the position of a human skeleton from an image in real time, and a somatosensory method based on the deep neural network can be used in the fields of man-machine interaction, augmented reality and the like. On the other hand, sensor technology is also rapidly developing, and various force touch sensors which are portable in size and stable in performance can be rapidly installed and integrated on various terminal devices.
Most intelligent fitness applications rely on comparing human skeleton positions as the main means of assessing body-building action quality. This single basis of assessment lacks any perception or evaluation of action force, which reduces the accuracy of body-building action quality assessment.
Disclosure of Invention
The embodiment of the invention aims to provide a body-building action quality assessment method, system, terminal device and storage medium, in order to solve the problem that existing body-building action quality assessment relies on a single basis because it lacks perception and assessment of action force.
The embodiment of the invention is realized in such a way that a body-building action quality assessment method comprises the following steps:
Performing action acquisition on body-building actions of a user to obtain action acquisition signals, and performing feature extraction on the action acquisition signals to obtain human skeleton position features and human action force features;
determining standard actions corresponding to the body-building actions in a standard body-building action library, and inquiring standard skeleton position characteristics and standard action force characteristics corresponding to the standard actions;
performing feature comparison on the human skeleton position features and the standard skeleton position features to obtain skeleton position feature similarity, and performing feature comparison on the human motion force features and the standard motion force features to obtain motion force feature similarity;
and generating a quality evaluation result of the body-building action according to the skeleton position characteristic similarity and the action force characteristic similarity.
Further, the step of comparing the human skeleton position feature with the standard skeleton position feature to obtain a skeleton position feature similarity includes:
determining user action key points in the human skeleton position features, and determining corresponding reference action key points of the user action key points in the standard skeleton position features;
Calculating the distance between the user action key point and the corresponding reference action key point to obtain a key point distance;
determining an image area of the user on a visual image in the motion acquisition signal, and determining a visibility mark of the user motion key point on the visual image, wherein the visibility mark is used for representing whether the user motion key point is visible on the visual image or not;
and determining a normalization factor according to the standard action, and calculating the skeleton position feature similarity according to the normalization factor, the key point distance, the image area and the visibility mark.
Further, the calculation formula adopted for calculating the skeleton position feature similarity according to the normalization factor, the key point distance, the image area and the visibility mark is:

OKS = Σ_i [ exp(−d_i² / (2·s²·σ_i²)) · δ(v_i > 0) ] / Σ_i δ(v_i > 0)

where OKS is the skeleton position feature similarity, d_i is the key point distance corresponding to the i-th user action key point, s is the square root of the image area, σ_i is the normalization factor, and v_i is the visibility mark corresponding to the i-th user action key point.
Further, the step of comparing the human motion force feature with the standard motion force feature to obtain motion force feature similarity includes:
Generating a user action force value curve according to the human action force characteristics, and generating a standard action force value curve according to the standard action force characteristics;
if the number of action time points between the user action force value curve and the standard action force value curve is different, carrying out dynamic time warping on the user action force value curve and the standard action force value curve;
after the dynamic time warping, respectively calculating the distances between the user action force value curve and the standard action force value curve at the same action time points to obtain force value distances;
setting the element parameters of a preset matrix network according to the force value distances, and determining the shortest path, between the user action force value curve and the standard action force value curve, on the preset matrix network whose elements are set by those parameters;
and calculating the sum of the distances of the corresponding force values of the elements on the shortest path to obtain the similarity of the action force characteristics.
Further, after the standard motion dynamics value curve is generated according to the standard motion dynamics feature, the method further includes:
if the number of action time points between the user action force value curve and the standard action force value curve is the same, respectively calculating the distance between the user action force value curve and the standard action force value curve at the same action time point to obtain a force value distance;
And calculating the sum of the distances of the force values between different action time points to obtain the action force characteristic similarity.
Further, the feature extraction of the motion acquisition signal to obtain a human skeleton position feature and a human motion force feature includes:
inputting the visual image signals in the action acquisition signals into a preset convolution network for feature extraction to obtain image features, and inputting the image features into a pre-trained integral posture estimation network for posture analysis to obtain coordinates of key points of a human body;
inputting the coordinates of the human body key points into a pre-trained confidence level mapping network for confidence level analysis to obtain key point confidence levels, and determining affinity vectors among different human body key points according to the key point confidence levels;
clustering the human body key points according to the affinity vector, and assembling the clustered human body key points to obtain the human body skeleton position characteristics;
and determining a motion resistance change value according to the force touch signal in the motion acquisition signal, and determining the motion force characteristic according to the motion resistance change value.
Further, before determining the standard motion corresponding to the exercise motion in the standard exercise motion library, the method further includes:
performing action collection on the body-building action of the appointed user to obtain a sample collection signal, and performing feature extraction on the sample collection signal to obtain a sample skeleton position feature and a sample action force feature;
performing time sequence synchronization processing on the sample skeleton position features and the sample action force features to obtain a sample skeleton position sequence and a sample action force sequence;
and respectively carrying out bilateral filtering treatment on the sample skeleton position sequence and the sample action force sequence, and carrying out action segmentation on the sample skeleton position sequence and the sample action force sequence after bilateral filtering treatment to obtain the standard action.
It is another object of an embodiment of the present invention to provide a fitness action quality assessment system, the system comprising:
the feature extraction module is used for acquiring the body-building actions of the user to obtain action acquisition signals, and extracting features of the action acquisition signals to obtain the position features and the action force features of the human body framework;
the standard action determining module is used for determining standard actions corresponding to the body-building actions in the standard body-building action library and inquiring standard skeleton position characteristics and standard action force characteristics corresponding to the standard actions;
The feature comparison module is used for comparing the human skeleton position features with the standard skeleton position features to obtain skeleton position feature similarity, and comparing the human motion force features with the standard motion force features to obtain motion force feature similarity;
and the quality evaluation result generation module is used for generating a quality evaluation result of the body-building action according to the skeleton position characteristic similarity and the action force characteristic similarity.
It is a further object of an embodiment of the present invention to provide a terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, which processor implements the steps of the method as described above when executing the computer program.
It is a further object of embodiments of the present invention to provide a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above method.
According to the embodiment of the invention, by collecting the user's body-building action to obtain an action acquisition signal and performing feature extraction on it, the human skeleton position features and human action force features corresponding to the user's body-building action can be effectively extracted. By determining the standard action corresponding to the body-building action in the standard body-building action library and querying the standard skeleton position features and standard action force features of that standard action, the accuracy of the feature comparisons between the human and standard skeleton position features and between the human and standard action force features is improved. Based on these two feature comparisons, the similarity between the user's body-building action and the standard action can be effectively determined, which enriches the basis of body-building action quality assessment. Finally, a quality evaluation result of the body-building action is generated from the skeleton position feature similarity and the action force feature similarity; based on this result, the user's body-building action can be effectively guided and corrected, improving the accuracy of body-building action quality assessment.
Drawings
FIG. 1 is a flow chart of a method for evaluating quality of exercise activities provided by a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a human skeleton morphology example and a motion dynamics curve morphology in an active state and an inactive state according to the first embodiment of the present invention;
FIG. 3 is a flow chart of a method for evaluating quality of exercise activity according to a second embodiment of the present invention;
FIG. 4 is a schematic diagram of a third embodiment of a system for evaluating quality of exercise activity according to the present invention;
FIG. 5 is a structural frame diagram of a quality assessment system for exercise activity provided by a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, in order to make the objects, technical solutions and advantages of the present invention more apparent. It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In order to illustrate the technical scheme of the invention, the following description is made by specific examples.
Example 1
Referring to fig. 1, a flowchart of a method for evaluating quality of exercise according to a first embodiment of the present invention may be applied to any intelligent exercise terminal device, where the intelligent exercise terminal device includes an intelligent exercise mirror, a mobile phone, a tablet or a wearable intelligent device, and the method for evaluating quality of exercise includes the steps of:
Step S10, performing action acquisition on body-building actions of a user to obtain action acquisition signals, and performing feature extraction on the action acquisition signals to obtain human skeleton position features and human action force features;
the action acquisition signals comprise visual image acquisition signals and force touch acquisition signals, and the acquisition of the visual image signals is usually realized by a camera on the intelligent body-building terminal equipment. The user stands in front of the intelligent body-building terminal equipment and performs standard body-building actions, and the camera on the intelligent body-building terminal equipment can acquire visual image signals when the user performs body-building actions in real time.
The collection of the force touch signals is usually realized by a left hand tension arm and a right hand tension arm on the intelligent body-building terminal equipment, and a force touch sensor is required to be installed in the tension arm. The user applies tensile force with different intensities to cause the resistance strain gauge metal wires in the force touch sensor to stretch or twist to different degrees, so that the resistance value is changed, a sequence of resistance value change values is generated, and the force touch acquisition signal is obtained.
For each frame of image in the visual image signal, a human skeleton extraction method based on a preset convolution network is used to obtain human skeleton position characteristics, and for resistance value change of each frame in the force touch acquisition signal, the resistance value change can be converted into action force values through function mapping to obtain the human action force characteristics.
Optionally, in this step, the feature extraction is performed on the motion acquisition signal to obtain a human skeleton position feature and a human motion dynamics feature, including:
inputting the visual image signals in the action acquisition signals into a preset convolution network for feature extraction to obtain image features, and inputting the image features into a pre-trained integral posture estimation network for posture analysis to obtain coordinates of key points of a human body;
the preset convolution network may be set according to requirements, for example, the preset convolution network may be set as a VGG (Visual Geometry Group) network, and the visual image signal is input into the preset convolution network to perform feature extraction, so as to extract image features corresponding to the exercise motion in the visual image signal.
Optionally, 18 human body key points are defined in the step, namely, a nose, a head, a right shoulder, a right elbow, a right hand head, a left shoulder, a left elbow, a left hand head, a right waist, a right knee, a right foot head, a left waist, a left knee, a left foot head, a right eye, a right ear, a left eye and a left ear, respectively, and gesture analysis is performed by inputting image features into a pre-trained integral gesture estimation network so as to obtain coordinates of different human body key points on a user.
Inputting the coordinates of the human body key points into a pre-trained confidence level mapping network for confidence level analysis to obtain key point confidence levels, and determining affinity vectors among different human body key points according to the key point confidence levels;
the coordinates of the key points of the human body are input into the pre-trained confidence level mapping network for confidence level analysis, so that the confidence levels of the key points corresponding to the key points of different human bodies can be effectively obtained, and affinity vectors among the key points of different human bodies can be calculated according to the confidence levels of the key points.
Clustering the human body key points according to the affinity vector, and assembling the clustered human body key points to obtain the human body skeleton position characteristics;
and clustering the human body key points according to the affinity vector to determine the key points corresponding to the nose, the head, the right shoulder, the right elbow, the right hand head, the left shoulder, the left elbow, the left hand head, the right waist, the right knee, the right foot head, the left waist, the left knee, the left foot head, the right eye, the right ear, the left eye and the left ear, and assembling the determined nose, the head, the right shoulder, the right elbow, the right hand head, the left shoulder, the left elbow, the left hand head, the right waist, the right knee, the right foot head, the left waist, the left knee, the left foot head, the right eye, the right ear, the left eye and the left ear to obtain the human body skeleton position feature, wherein the human body skeleton position feature can be a human body skeleton image.
And determining a motion resistance change value according to the force touch signal in the motion acquisition signal, and determining the motion force characteristic according to the motion resistance change value.
For a single-frame force touch signal, the resistance change information acquired by the force touch sensor while the user pulls the mechanical tension arm is converted into force information. Given the known relationship between force and resistance, the resistance change ΔR can be converted into the action force value F through a function mapping φ, which can be summarized by the following formula:
F=φ(ΔR)。
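A minimal sketch of such a mapping is given below, assuming a simple linear strain-gauge calibration; the coefficients are illustrative placeholders rather than values taken from this embodiment.

```python
def resistance_to_force(delta_r, gain=2.5, offset=0.0):
    """Map a resistance change (ohms) to an action force value (newtons).

    Assumes a linear calibration F = gain * delta_r + offset; a real device
    would fit phi from calibration data for its specific strain gauge.
    """
    return gain * delta_r + offset

# Example: convert one frame of the force-touch signal.
force_value = resistance_to_force(0.8)  # -> 2.0 N under the assumed calibration
```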
Further, in this step, after the human skeleton position feature and the human motion force feature are obtained, the method further includes:
and carrying out time sequence synchronization processing on the human skeleton position characteristics and the action force characteristics, wherein the time sequence synchronization processing is used for adjusting the human skeleton position characteristics and the action force characteristics to the same frame rate, and the visual image sensor and the force touch sensor have different frequencies/frame rates when acquiring data, so that the extracted human skeleton position characteristics and the extracted action force characteristics have different frame rates. In order to ensure that the position features and the action force features of the human body framework correspond to each other in time sequence, the time sequence synchronization processing is required to be carried out on the position features and the action force features of the human body framework. The registration mode can adopt a mode of downsampling high-frame-rate information, so that the frame rate of the body skeleton position characteristic and the motion dynamics characteristic with higher frame rate is reduced, and finally the body skeleton position characteristic and the motion dynamics characteristic have the same frame rate, so that the synchronization on time sequence is realized.
Further, in this step, before determining the standard motion corresponding to the exercise motion in the standard exercise motion library, the method further includes:
performing action collection on the body-building action of the appointed user to obtain a sample collection signal, and performing feature extraction on the sample collection signal to obtain a sample skeleton position feature and a sample action force feature;
the sample collection signals collected by the intelligent body-building terminal device comprise visual image collection signals and force touch collection signals, the mode of feature extraction of the sample collection signals is the same as the mode of feature extraction of the action collection signals, and the detailed description is omitted.
And carrying out time sequence synchronous processing on the sample skeleton position features and the sample action force features to obtain a sample skeleton position sequence and a sample action force sequence, wherein the time sequence synchronous processing on the sample skeleton position features and the sample action force features is the same as the time sequence synchronous processing operation on the human skeleton position features and the action force features, and is not repeated here.
Performing bilateral filtering processing on the sample skeleton position sequence and the sample action force sequence respectively, and performing action segmentation on the sample skeleton position sequence and the sample action force sequence subjected to bilateral filtering processing to obtain the standard action;
The sample skeleton position sequence and the sample action force sequence contain considerable noise and anomalies, which mainly originate from erroneous key points produced when calculating the human skeleton position and from unavoidable noise in the generation and reception of the resistance signals. The bilateral filtering is applied to the skeleton position sequence and the action force sequence separately, so that noise is removed while the meaningful features of both sequences are preserved as far as possible.
Specifically, within a short time window T, w(t, t_0) denotes the weight that time t_0 inside the window exerts on time t, and can be calculated from the following formula:

w(t, t_0) = G_{σ_s}(‖t_0 − t‖) · G_{σ_r}(‖s_{t_0} − s_t‖)

where G_{σ_s} and G_{σ_r} are Gaussian functions with variances σ_s and σ_r respectively, ‖t_0 − t‖ is the time difference between the two moments, s denotes the human skeleton position or action force value at a moment, and ‖s_{t_0} − s_t‖ is the difference between the values at the two moments. The filtered human skeleton position or action force value s̃_t at time t is a weighted average of all values in the time window T, and can be calculated by the following formula:

s̃_t = Σ_{t_0∈T} w(t, t_0) · s_{t_0} / Σ_{t_0∈T} w(t, t_0)
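A minimal one-dimensional sketch of this bilateral filtering, assuming the sequence holds scalar action force values (the same weighting would be applied per coordinate of a skeleton position sequence); the window size and the two standard deviations are illustrative choices only:

```python
import math

def bilateral_filter_1d(seq, window=5, sigma_s=2.0, sigma_r=1.0):
    """Smooth a 1-D sequence while preserving sharp, meaningful transitions."""
    def gauss(x, sigma):
        return math.exp(-(x * x) / (2.0 * sigma * sigma))

    filtered = []
    for t, s_t in enumerate(seq):
        lo, hi = max(0, t - window), min(len(seq), t + window + 1)
        # Weight combines closeness in time (sigma_s) and closeness in value (sigma_r).
        weights = [gauss(t0 - t, sigma_s) * gauss(seq[t0] - s_t, sigma_r)
                   for t0 in range(lo, hi)]
        total = sum(weights)
        filtered.append(sum(w * seq[t0] for w, t0 in zip(weights, range(lo, hi))) / total)
    return filtered
```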
in this step, the main function of motion segmentation is to extract a single motion from a continuously received stream of human skeleton positions and motion dynamics values. Because fitness exercises are continuous, it is important to automatically and accurately segment the corresponding actions from the extracted two types of data streams.
The action segmentation includes two steps: firstly, identifying whether a designated user is currently in an active state or an inactive state by comparing the position of a human skeleton and action strength in a static state; then, a flag is set for each independent action start and end for the sequence.
In a state where the human body is stationary, the human body skeleton takes on a specific resting form. The object keypoint similarity (OKS, Object Keypoint Similarity) between the human skeleton position at each moment and the human skeleton position of this specific form is calculated. In addition, the action force value should approach zero when the human body is stationary. Only when the OKS similarity is higher than a certain threshold and the force value is close to zero is the human body judged to be in an inactive state at that moment. Referring to fig. 2, examples of the human skeleton morphology and the action force curve morphology in the active state and the inactive state are shown.
For a given human skeleton position and action force value sequence, its activity state curve can be calculated, as shown in fig. 2. When the state jumps from inactive to active and the active state then lasts for more than 3 seconds, the jump is marked as an action start; when the state jumps from active back to inactive after the active state has lasted for more than 3 seconds, the jump is marked as an action end. The action start marker and the action end marker must occur in pairs, with the start marker preceding the end marker.
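The segmentation rules above can be sketched as follows; the OKS threshold, the near-zero force tolerance and the conversion of the 3-second rule into frame counts are illustrative assumptions:

```python
def segment_actions(oks_seq, force_seq, fps, oks_thresh=0.8, force_eps=0.5, min_secs=3.0):
    """Return (start_index, end_index) pairs for each detected action."""
    def is_active(oks, force):
        # Inactive only when the pose matches the resting form AND the force is near zero.
        return not (oks > oks_thresh and abs(force) < force_eps)

    active = [is_active(o, f) for o, f in zip(oks_seq, force_seq)]
    min_frames = int(min_secs * fps)
    actions, start = [], None
    i = 0
    while i < len(active):
        if start is None and active[i]:
            run = i
            while run < len(active) and active[run]:
                run += 1
            if run - i >= min_frames:          # active run long enough -> action start
                start = i
            i = run
        elif start is not None and not active[i]:
            actions.append((start, i))         # active -> inactive jump marks the end
            start = None
            i += 1
        else:
            i += 1
    return actions
```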
In the step, the standard actions obtained by segmentation, the corresponding sample skeleton position sequences and the corresponding sample action force sequences are stored in a standard body-building action library, and the sample skeleton position sequences and the sample action force sequences stored in the standard body-building action library are standard skeleton position features and standard action force features corresponding to the standard actions.
Step S20, determining standard actions corresponding to the body-building actions in a standard body-building action library, and inquiring standard skeleton position characteristics and standard action force characteristics corresponding to the standard actions;
the method comprises the steps of obtaining a query instruction sent by a user, and determining a standard action corresponding to the body-building action of the user in a standard body-building action library according to a specified identifier in the query instruction. In the step, the standard body-building action library stores the corresponding relations between different standard actions and the corresponding standard skeleton position characteristics and standard action force characteristics.
Step S30, comparing the human body skeleton position characteristics with the standard skeleton position characteristics to obtain skeleton position characteristic similarity, and comparing the human body action force characteristics with the standard action force characteristics to obtain action force characteristic similarity;
By comparing the human skeleton position features with the standard skeleton position features, the similarity between the user's body-building action and the standard action in terms of human skeleton position can be effectively calculated. In this step, the feature comparison between the human skeleton position features and the standard skeleton position features may adopt an object key point similarity (OKS) algorithm.
Likewise, by comparing the human action force features with the standard action force features, the similarity between the user's body-building action and the standard action in terms of action force can be effectively calculated. Optionally, a dynamic time warping (Dynamic Time Warping, DTW) algorithm may be adopted for the comparison between the human action force features and the standard action force features.
Step S40, generating a quality evaluation result of the body-building action according to the skeleton position feature similarity and the action force feature similarity;
The quality evaluation result of the body-building action is generated according to the skeleton position feature similarity and the action force feature similarity; based on this result, a quality evaluation of the user's body-building action at each moment can be obtained and used to prompt and correct nonstandard or incorrect body-building actions.
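The embodiment does not prescribe a specific way of combining the two similarities into the final result; as one illustrative assumption only, a weighted combination with a pass threshold could look like the following sketch, which further assumes the force-curve comparison has already been normalized into a [0, 1] similarity where higher means more similar.

```python
def assess_quality(oks_similarity, force_similarity, w_pose=0.5, w_force=0.5, threshold=0.8):
    """Combine the two similarities into a single score and a textual hint.

    The equal weights and the 0.8 threshold are illustrative assumptions only.
    """
    score = w_pose * oks_similarity + w_force * force_similarity
    if score >= threshold:
        return score, "Action meets the standard"
    hint = "adjust posture" if oks_similarity < force_similarity else "adjust applied force"
    return score, f"Non-standard action, please {hint}"
```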
According to this embodiment, by collecting the user's body-building action to obtain an action acquisition signal and performing feature extraction on it, the human skeleton position features and human action force features corresponding to the user's body-building action can be effectively extracted. By determining the standard action corresponding to the body-building action in the standard body-building action library and querying the standard skeleton position features and standard action force features of that standard action, the accuracy of the feature comparisons between the human and standard skeleton position features and between the human and standard action force features is improved. Based on these two feature comparisons, the similarity between the user's body-building action and the standard action can be effectively determined, which enriches the basis of body-building action quality assessment. Finally, a quality evaluation result of the body-building action is generated from the skeleton position feature similarity and the action force feature similarity; based on this result, the user's body-building action can be effectively guided and corrected, improving the accuracy of body-building action quality assessment.
Example two
Referring to fig. 3, a flowchart of a method for evaluating quality of exercise according to a second embodiment of the present invention is provided, and the method is used for further refining step S30, and includes the steps of:
step S31, determining user action key points in the human skeleton position features, and determining corresponding reference action key points of the user action key points in the standard skeleton position features;
the method comprises the steps of identifying key points of human skeleton position features to determine user action key points in the human skeleton position features, and determining corresponding reference action key points of the user action key points in standard skeleton position features through the identification of the determined user action key points.
For example, when key point a1 in the human skeleton position features is identified as the right shoulder, the reference action key point corresponding to the right shoulder in the standard skeleton position features is determined according to the right-shoulder identifier.
Step S32, calculating the distance between the user action key point and the corresponding reference action key point to obtain a key point distance;
The Euclidean distance between each user action key point and its corresponding reference action key point is calculated; the Euclidean distance is commonly used to measure the absolute distance between two points in a multidimensional space. The formula is as follows:

d(p_1, p_2) = √((x_1 − x_2)² + (y_1 − y_2)²)

where p_1 and p_2 are the user action key point and the reference action key point respectively, and the coordinates of the two points p_1, p_2 are (x_1, y_1) and (x_2, y_2).
Step S33, determining the image area of the user on a visual image in the action acquisition signal, and determining the visibility mark of the user action key point on the visual image;
the visibility mark is used for representing whether the user action key point is visible on the visual image, when the visibility mark is 0, the corresponding user action key point is not marked, when the visibility mark is 1, the corresponding user action key point is marked but blocked in the image, and when the visibility mark is 2, the corresponding user action key point is marked and visible in the image.
And step S34, determining a normalization factor according to the standard action, and calculating the feature similarity of the skeleton position according to the normalization factor, the key point distance, the image area and the visibility mark.
Optionally, in this step, the calculation formula adopted for calculating the skeleton position feature similarity according to the normalization factor, the key point distance, the image area and the visibility mark is:

OKS = Σ_i [ exp(−d_i² / (2·s²·σ_i²)) · δ(v_i > 0) ] / Σ_i δ(v_i > 0)

where OKS is the skeleton position feature similarity, d_i is the key point distance corresponding to the i-th user action key point, s is the square root of the image area, σ_i is the normalization factor, v_i is the visibility mark corresponding to the i-th user action key point, and δ(·) evaluates to 1 when its condition is met and 0 otherwise.
In the above flow, the numerator of the OKS formula is equivalent to a Gaussian distribution centered on the true value of the reference action key point; s is used for scale normalization, and σ_i is the standard deviation describing the contribution of each key point. These two factors make OKS perceptually meaningful when measuring key point similarity. However, the OKS formula only accounts for the differing scales of the two persons in the video; it does not account for physical differences between the user and the coach. For example, when the two persons have different upper-to-lower body proportions, their actions may be performed in place at the same time, yet the key points will not correspond. To reduce the influence of this factor, this embodiment calculates the ratios of the angles between the lines connecting the reference joints before computing the OKS similarity, and then reconstructs reference key point values matching the user's body proportions by combining the human skeleton key points from an image of the user standing.
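A minimal sketch of the OKS computation defined above, assuming the per-key-point distances, normalization factors and visibility marks are already available as arrays (the body-proportion correction described in the preceding paragraph is not included):

```python
import numpy as np

def object_keypoint_similarity(d, sigma, v, area):
    """Compute OKS from key-point distances d, normalization factors sigma,
    visibility marks v (0 = unlabeled, 1 = labeled but occluded, 2 = visible),
    and the image area occupied by the user."""
    d, sigma, v = np.asarray(d, float), np.asarray(sigma, float), np.asarray(v)
    s = np.sqrt(area)                 # square root of the image area
    labeled = v > 0                   # delta(v_i > 0)
    if not labeled.any():
        return 0.0
    k = np.exp(-d[labeled] ** 2 / (2 * s ** 2 * sigma[labeled] ** 2))
    return float(k.sum() / labeled.sum())
```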
Further, in this embodiment, the step of comparing the motion force feature of the human body with the standard motion force feature to obtain a motion force feature similarity includes:
generating a user action force value curve according to the human action force characteristics, and generating a standard action force value curve according to the standard action force characteristics;
if the number of action time points between the user action force value curve and the standard action force value curve is different, carrying out dynamic time warping on the user action force value curve and the standard action force value curve;
after the dynamic time warping, the distances between the user action force value curve and the standard action force value curve at the same action time points are calculated to obtain force value distances, where a force value distance reflects the similarity of the force values at the same action time point between the two curves;
setting the element parameters of a preset matrix network according to the force value distances, and determining the shortest path, between the user action force value curve and the standard action force value curve, on the preset matrix network whose elements are set by those parameters;
And calculating the sum of the distances of the corresponding force values of the elements on the shortest path to obtain the similarity of the action force characteristics.
Further, after the standard motion dynamics value curve is generated according to the standard motion dynamics characteristic, the method further includes: if the number of action time points between the user action force value curve and the standard action force value curve is the same, respectively calculating the distance between the user action force value curve and the standard action force value curve at the same action time point to obtain a force value distance; and calculating the sum of the distances of the force values between different action time points to obtain the action force characteristic similarity.
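A minimal dynamic-time-warping sketch for comparing the two force value curves is given below, assuming each curve is a sequence of scalar force values; the returned value is the accumulated force value distance along the shortest path, so a smaller value indicates greater similarity. As described above, the equal-length case reduces to a plain sum of per-time-point distances.

```python
import numpy as np

def dtw_force_distance(user_curve, standard_curve):
    """Accumulated distance along the shortest warping path between two force curves."""
    n, m = len(user_curve), len(standard_curve)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(user_curve[i - 1] - standard_curve[j - 1])   # force value distance
            cost[i, j] = d + min(cost[i - 1, j], cost[i, j - 1], cost[i - 1, j - 1])
    return float(cost[n, m])

def same_length_distance(user_curve, standard_curve):
    """Equal number of action time points: plain sum of per-time-point distances."""
    return float(sum(abs(u - s) for u, s in zip(user_curve, standard_curve)))
```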
In this embodiment, determining the user action key points in the human skeleton position features and their corresponding reference action key points in the standard skeleton position features improves the accuracy of the key point distance calculation. Determining the image area occupied by the user on the visual image in the action acquisition signal, together with the visibility marks of the user action key points on that image, safeguards the calculation of the skeleton position feature similarity.
Example III
Referring to fig. 4, a schematic structural diagram of a exercise quality assessment system 100 according to a third embodiment of the present invention includes: a feature extraction module 10, a standard action determination module 11, a feature comparison module 12, and a quality evaluation result generation module 13, wherein:
the feature extraction module 10 is configured to perform motion acquisition on a body-building motion of a user to obtain a motion acquisition signal, and perform feature extraction on the motion acquisition signal to obtain a human skeleton position feature and a human motion force feature.
Wherein the feature extraction module 10 is further configured to: inputting the visual image signals in the action acquisition signals into a preset convolution network for feature extraction to obtain image features, and inputting the image features into a pre-trained integral posture estimation network for posture analysis to obtain coordinates of key points of a human body;
inputting the coordinates of the human body key points into a pre-trained confidence level mapping network for confidence level analysis to obtain key point confidence levels, and determining affinity vectors among different human body key points according to the key point confidence levels;
clustering the human body key points according to the affinity vector, and assembling the clustered human body key points to obtain the human body skeleton position characteristics;
And determining a motion resistance change value according to the force touch signal in the motion acquisition signal, and determining the motion force characteristic according to the motion resistance change value.
Optionally, the feature extraction module 10 is further configured to: performing action collection on the body-building action of the appointed user to obtain a sample collection signal, and performing feature extraction on the sample collection signal to obtain a sample skeleton position feature and a sample action force feature;
performing time sequence synchronization processing on the sample skeleton position features and the sample action force features to obtain a sample skeleton position sequence and a sample action force sequence;
and respectively carrying out bilateral filtering treatment on the sample skeleton position sequence and the sample action force sequence, and carrying out action segmentation on the sample skeleton position sequence and the sample action force sequence after bilateral filtering treatment to obtain the standard action.
The standard motion determining module 11 is configured to determine a standard motion corresponding to the exercise motion in a standard exercise motion library, and query a standard skeleton position feature and a standard motion dynamics feature corresponding to the standard motion.
And the feature comparison module 12 is configured to perform feature comparison on the human skeleton position feature and the standard skeleton position feature to obtain a skeleton position feature similarity, and perform feature comparison on the human motion force feature and the standard motion force feature to obtain a motion force feature similarity.
Wherein the feature comparison module 12 is further configured to: determining user action key points in the human skeleton position features, and determining corresponding reference action key points of the user action key points in the standard skeleton position features;
calculating the distance between the user action key point and the corresponding reference action key point to obtain a key point distance;
determining an image area of the user on a visual image in the motion acquisition signal, and determining a visibility mark of the user motion key point on the visual image, wherein the visibility mark is used for representing whether the user motion key point is visible on the visual image or not;
and determining a normalization factor according to the standard action, and calculating the skeleton position feature similarity according to the normalization factor, the key point distance, the image area and the visibility mark.
Optionally, the calculation formula adopted for calculating the skeleton position feature similarity according to the normalization factor, the key point distance, the image area and the visibility mark is:

OKS = Σ_i [ exp(−d_i² / (2·s²·σ_i²)) · δ(v_i > 0) ] / Σ_i δ(v_i > 0)

where OKS is the skeleton position feature similarity, d_i is the key point distance corresponding to the i-th user action key point, s is the square root of the image area, σ_i is the normalization factor, and v_i is the visibility mark corresponding to the i-th user action key point.
Optionally, the feature comparison module 12 is further configured to: generating a user action force value curve according to the human action force characteristics, and generating a standard action force value curve according to the standard action force characteristics;
if the number of action time points between the user action force value curve and the standard action force value curve is different, carrying out dynamic time warping on the user action force value curve and the standard action force value curve;
after the dynamic time warping, respectively calculating the distances between the user action force value curve and the standard action force value curve at the same action time points to obtain force value distances;
setting the element parameters of a preset matrix network according to the force value distances, and determining the shortest path, between the user action force value curve and the standard action force value curve, on the preset matrix network whose elements are set by those parameters;
and calculating the sum of the distances of the corresponding force values of the elements on the shortest path to obtain the similarity of the action force characteristics.
Further, the feature comparison module 12 is further configured to: if the number of action time points between the user action force value curve and the standard action force value curve is the same, respectively calculating the distance between the user action force value curve and the standard action force value curve at the same action time point to obtain a force value distance;
and calculating the sum of the distances of the force values between different action time points to obtain the action force characteristic similarity.
And the quality evaluation result generation module 13 is used for generating a quality evaluation result of the body-building action according to the skeleton position feature similarity and the action force feature similarity.
Referring to fig. 5, a structural frame diagram of a fitness action quality assessment system 100 according to a third embodiment of the present invention includes a data acquisition module, a feature extraction module, and an action comparison module. The data acquisition module acquires action acquisition signals through the visual image sensor and the force touch sensor. The feature extraction module calculates the human skeleton position and the action force corresponding to each frame of the action acquisition signals by using a human skeleton extraction method based on a convolutional neural network and a mapping from resistance values to force values. The human skeleton position and action force value data streams are then transmitted into the action comparison module in real time. In the action comparison module, after time sequence synchronization, the human skeleton positions are first compared based on the object key point similarity algorithm to obtain the spatial position similarity between the current user action and the standard action; the force value curves are then compared based on the dynamic time warping algorithm to obtain the similarity between the current user action force value curve and the standard action force value curve. The two similarities are combined to obtain the exercise quality evaluation of the user at each moment, which can be used to prompt and correct nonstandard or incorrect exercise.
According to this embodiment, by collecting the user's body-building action to obtain an action acquisition signal and performing feature extraction on it, the human skeleton position features and human action force features corresponding to the user's body-building action can be effectively extracted. By determining the standard action corresponding to the body-building action in the standard body-building action library and querying the standard skeleton position features and standard action force features of that standard action, the accuracy of the feature comparisons between the human and standard skeleton position features and between the human and standard action force features is improved. Based on these two feature comparisons, the similarity between the user's body-building action and the standard action can be effectively determined, which enriches the basis of body-building action quality assessment. Finally, a quality evaluation result of the body-building action is generated from the skeleton position feature similarity and the action force feature similarity; based on this result, the user's body-building action can be effectively guided and corrected, improving the accuracy of body-building action quality assessment.
Example IV
Fig. 6 is a block diagram of a terminal device 2 according to a fourth embodiment of the present application. As shown in fig. 6, the terminal device 2 of this embodiment includes: a processor 20, a memory 21 and a computer program 22 stored in the memory 21 and executable on the processor 20, such as a program implementing the fitness action quality assessment method. The processor 20, when executing the computer program 22, implements the steps of the various embodiments of the exercise quality assessment method described above, such as S10 to S40 shown in fig. 1 or S31 to S34 shown in fig. 3. Alternatively, the processor 20 may implement the functions of each unit in the embodiment corresponding to fig. 4, for example the functions of units 10 to 13 shown in fig. 4, when executing the computer program 22; reference is made to the detailed description of the embodiment corresponding to fig. 4, which is not repeated here.
Illustratively, the computer program 22 may be partitioned into one or more units that are stored in the memory 21 and executed by the processor 20 to complete the present application. The one or more units may be a series of computer program instruction segments capable of performing a specific function for describing the execution of the computer program 22 in the terminal device 2. For example, the computer program 22 may be divided into a feature extraction module 10, a standard action determination module 11, a feature comparison module 12 and a quality evaluation result generation module 13, each unit functioning specifically as described above.
The terminal device may include, but is not limited to, a processor 20, a memory 21. It will be appreciated by those skilled in the art that fig. 6 is merely an example of the terminal device 2 and does not constitute a limitation of the terminal device 2, and may include more or less components than illustrated, or may combine certain components, or different components, e.g., the terminal device may further include an input-output device, a network access device, a bus, etc.
The processor 20 may be a central processing unit (Central Processing Unit, CPU), another general purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 21 may be an internal storage unit of the terminal device 2, such as a hard disk or memory of the terminal device 2. The memory 21 may also be an external storage device of the terminal device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card provided on the terminal device 2. Further, the memory 21 may include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used for storing the computer program as well as other programs and data required by the terminal device. The memory 21 may also be used for temporarily storing data that has been output or is to be output.
In addition, each functional unit in each embodiment of the present application may be integrated in one processing unit, or each unit may exist alone physically, or two or more units may be integrated in one unit. The integrated units may be implemented in hardware or in software functional units.
The integrated modules, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer readable storage medium, which may be non-volatile or volatile. Based on such understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program; the computer program may be stored in a computer readable storage medium, and when executed by a processor, implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable storage medium may be appropriately adjusted according to the requirements of legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable storage medium does not include electrical carrier signals and telecommunications signals.
The above embodiments are only intended to illustrate the technical solution of the present application, not to limit it. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments can still be modified, or some of their technical features can be replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present application, and are intended to be included in the scope of the present application.

Claims (8)

1. A body-building action quality assessment method, the method comprising:
performing action acquisition on a body-building action of a user to obtain an action acquisition signal, and performing feature extraction on the action acquisition signal to obtain human skeleton position features and human action force features;
determining a standard action corresponding to the body-building action in a standard body-building action library, and querying standard skeleton position features and standard action force features corresponding to the standard action;
performing feature comparison on the human skeleton position features and the standard skeleton position features to obtain skeleton position feature similarity, and performing feature comparison on the human action force features and the standard action force features to obtain action force feature similarity;
generating a quality evaluation result of the body-building action according to the skeleton position feature similarity and the action force feature similarity;
the step of performing feature comparison on the human action force features and the standard action force features to obtain the action force feature similarity comprises the following steps:
generating a user action force value curve according to the human action force features, and generating a standard action force value curve according to the standard action force features;
if the number of action time points between the user action force value curve and the standard action force value curve is different, performing dynamic time warping on the user action force value curve and the standard action force value curve;
after the dynamic time warping, respectively calculating the distances between the user action force value curve and the standard action force value curve at the same action time points to obtain force value distances;
setting element parameters of a preset matrix network according to the force value distances, and determining the shortest path, on the preset matrix network set by the element parameters, between the user action force value curve and the standard action force value curve;
calculating the sum of the force value distances corresponding to the elements on the shortest path to obtain the action force feature similarity;
after the standard action force value curve is generated according to the standard action force features, the method further comprises the following steps:
if the number of action time points between the user action force value curve and the standard action force value curve is the same, respectively calculating the distance between the user action force value curve and the standard action force value curve at each same action time point to obtain force value distances;
and calculating the sum of the force value distances between different action time points to obtain the action force feature similarity.
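As an illustration of the curve comparison described in claim 1, a minimal sketch is given below (Python with NumPy; the use of the absolute difference as the force value distance and the parameter names are assumptions not specified by the claim). The accumulated distance along the shortest warping path is returned; a smaller value corresponds to a higher action force feature similarity.

import numpy as np

def dtw_force_distance(user_curve, standard_curve):
    # Dynamic time warping between two force value curves with different
    # numbers of action time points.  Element (i, j) of the accumulated-cost
    # matrix holds the best total force value distance for aligning the first
    # i user samples with the first j standard samples.
    user = np.asarray(user_curve, dtype=float)
    std = np.asarray(standard_curve, dtype=float)
    n, m = len(user), len(std)
    acc = np.full((n + 1, m + 1), np.inf)
    acc[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(user[i - 1] - std[j - 1])      # force value distance (assumed metric)
            acc[i, j] = cost + min(acc[i - 1, j],      # insertion
                                   acc[i, j - 1],      # deletion
                                   acc[i - 1, j - 1])  # match
    return float(acc[n, m])                            # sum along the shortest path

def pointwise_force_distance(user_curve, standard_curve):
    # Same-length case: sum of the force value distances at each action time point.
    user = np.asarray(user_curve, dtype=float)
    std = np.asarray(standard_curve, dtype=float)
    return float(np.abs(user - std).sum())

# Curves with different numbers of action time points fall back to dynamic time warping.
print(dtw_force_distance([1, 2, 3, 2, 1], [1, 2, 2, 3, 2, 1, 0]))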
2. The body-building action quality assessment method according to claim 1, wherein the step of performing feature comparison on the human skeleton position features and the standard skeleton position features to obtain the skeleton position feature similarity comprises:
determining user action key points in the human skeleton position features, and determining corresponding reference action key points of the user action key points in the standard skeleton position features;
calculating the distance between the user action key point and the corresponding reference action key point to obtain a key point distance;
determining an image area of the user on a visual image in the action acquisition signal, and determining a visibility mark of each user action key point on the visual image, wherein the visibility mark is used for representing whether the user action key point is visible on the visual image;
and determining a normalization factor according to the standard action, and calculating the skeleton position feature similarity according to the normalization factor, the key point distance, the image area and the visibility mark.
3. The body-building action quality assessment method according to claim 2, wherein the calculation formula used for calculating the skeleton position feature similarity according to the normalization factor, the key point distance, the image area and the visibility mark is:
OKS = ( Σ_i exp( −d_i² / (2 s² k²) ) · δ(v_i > 0) ) / ( Σ_i δ(v_i > 0) )
wherein OKS is the skeleton position feature similarity, d_i is the key point distance corresponding to the i-th user action key point, s is the square root of the image area, k is the normalization factor, v_i is the visibility mark corresponding to the i-th user action key point, and δ(·) is a visibility calculation function that returns 1 when v_i > 0 and returns 0 otherwise.
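A minimal sketch of the skeleton position feature similarity calculation of claims 2 and 3 is given below (Python with NumPy; the parameter names and the averaging over visible key points are assumptions where the claim leaves the details open):

import numpy as np

def skeleton_position_similarity(keypoint_distances, visibility, image_area, k):
    # keypoint_distances: per-keypoint distances d_i between user action key
    # points and their reference action key points.
    # visibility: visibility marks v_i; image_area: user's area on the visual
    # image (s is its square root); k: normalization factor from the standard action.
    d = np.asarray(keypoint_distances, dtype=float)
    v = np.asarray(visibility, dtype=float)
    s2 = float(image_area)                 # s**2, since s = sqrt(image_area)
    visible = v > 0                        # delta(v_i > 0): keypoint visible on the image
    if not visible.any():
        return 0.0
    per_kp = np.exp(-d[visible] ** 2 / (2.0 * s2 * k ** 2))
    return float(per_kp.mean())            # average over visible key points

# Example: three key points, the last one occluded on the visual image.
print(skeleton_position_similarity([5.0, 8.0, 100.0], [2, 1, 0],
                                   image_area=200 * 200, k=0.05))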
4. The body-building action quality assessment method according to claim 1, wherein the step of performing feature extraction on the action acquisition signal to obtain the human skeleton position features and the human action force features comprises:
inputting the visual image signal in the action acquisition signal into a preset convolution network for feature extraction to obtain image features, and inputting the image features into a pre-trained integral posture estimation network for posture analysis to obtain coordinates of human body key points;
inputting the coordinates of the human body key points into a pre-trained confidence level mapping network for confidence level analysis to obtain key point confidence levels, and determining affinity vectors among different human body key points according to the key point confidence levels;
clustering the human body key points according to the affinity vectors, and assembling the clustered human body key points to obtain the human skeleton position features;
and determining a motion resistance change value according to the force touch signal in the action acquisition signal, and determining the human action force features according to the motion resistance change value.
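For the force branch of claim 4, a minimal sketch is given below (Python with NumPy; the linear mapping from the motion resistance change value to a force value series, the baseline and the gain are illustrative assumptions, since the claim does not specify how the force feature is derived from the resistance change):

import numpy as np

def force_feature_from_touch_signal(resistance_samples, baseline_resistance, gain=1.0):
    # Derive a force value series from the change in motion resistance
    # measured by the force touch signal.
    r = np.asarray(resistance_samples, dtype=float)
    resistance_change = r - float(baseline_resistance)   # motion resistance change value
    return gain * resistance_change                       # assumed human action force feature

print(force_feature_from_touch_signal([10.0, 12.5, 15.0, 13.0], baseline_resistance=10.0))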
5. The body-building action quality assessment method according to claim 1, wherein before determining the standard action corresponding to the body-building action in the standard body-building action library, the method further comprises:
performing action acquisition on the body-building action of a designated user to obtain a sample acquisition signal, and performing feature extraction on the sample acquisition signal to obtain sample skeleton position features and sample action force features;
performing time sequence synchronization processing on the sample skeleton position features and the sample action force features to obtain a sample skeleton position sequence and a sample action force sequence;
and respectively performing bilateral filtering processing on the sample skeleton position sequence and the sample action force sequence, and performing action segmentation on the sample skeleton position sequence and the sample action force sequence after the bilateral filtering processing to obtain the standard action.
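The bilateral filtering of claim 5 can be illustrated for a one-dimensional sample sequence as follows (Python with NumPy; the Gaussian temporal and value kernels and the parameter values are illustrative assumptions):

import numpy as np

def bilateral_filter_1d(sequence, radius=3, sigma_time=2.0, sigma_value=1.0):
    # Each sample is replaced by a weighted average of its neighbours; the
    # weights fall off with temporal distance and with value difference, so
    # noise is smoothed while sharp transitions between actions are preserved.
    x = np.asarray(sequence, dtype=float)
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        window = x[lo:hi]
        t = np.arange(lo, hi)
        w = (np.exp(-((t - i) ** 2) / (2 * sigma_time ** 2)) *
             np.exp(-((window - x[i]) ** 2) / (2 * sigma_value ** 2)))
        out[i] = np.sum(w * window) / np.sum(w)
    return out

noisy = np.array([0.0, 0.1, -0.1, 0.0, 5.0, 5.2, 4.9, 5.1])  # step between two actions
print(bilateral_filter_1d(noisy))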
6. A body-building action quality assessment system for performing the method of claim 1, the system comprising:
the feature extraction module is used for performing action acquisition on the body-building action of the user to obtain an action acquisition signal, and performing feature extraction on the action acquisition signal to obtain human skeleton position features and human action force features;
the standard action determination module is used for determining the standard action corresponding to the body-building action in the standard body-building action library, and querying the standard skeleton position features and standard action force features corresponding to the standard action;
the feature comparison module is used for performing feature comparison on the human skeleton position features and the standard skeleton position features to obtain the skeleton position feature similarity, and performing feature comparison on the human action force features and the standard action force features to obtain the action force feature similarity;
and the quality evaluation result generation module is used for generating a quality evaluation result of the body-building action according to the skeleton position feature similarity and the action force feature similarity.
7. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the method according to any of claims 1 to 5 when the computer program is executed.
8. A computer readable storage medium storing a computer program, characterized in that the computer program when executed by a processor implements the steps of the method according to any one of claims 1 to 5.
CN202110582037.XA 2021-05-27 2021-05-27 Body-building action quality assessment method, body-building action quality assessment system, terminal equipment and storage medium Active CN113239849B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110582037.XA CN113239849B (en) 2021-05-27 2021-05-27 Body-building action quality assessment method, body-building action quality assessment system, terminal equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113239849A CN113239849A (en) 2021-08-10
CN113239849B true CN113239849B (en) 2023-12-19

Family

ID=77139021

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110582037.XA Active CN113239849B (en) 2021-05-27 2021-05-27 Body-building action quality assessment method, body-building action quality assessment system, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113239849B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113925497B (en) * 2021-10-22 2023-09-15 吉林大学 Binocular vision measurement system-based automobile passenger riding posture extraction method
CN114550027A (en) * 2022-01-18 2022-05-27 清华大学 Vision-based motion video fine analysis method and device

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106446757A (en) * 2016-05-20 2017-02-22 北京九艺同兴科技有限公司 Human body motion data similarity automatic evaluation method
CN108211309A (en) * 2017-05-25 2018-06-29 深圳市未来健身衣科技有限公司 The guidance method and device of body building
CN109558824A (en) * 2018-11-23 2019-04-02 卢伟涛 A kind of body-building movement monitoring and analysis system based on personnel's image recognition
CN110020630A (en) * 2019-04-11 2019-07-16 成都乐动信息技术有限公司 Method, apparatus, storage medium and the electronic equipment of assessment movement completeness
CN110633608A (en) * 2019-03-21 2019-12-31 广州中科凯泽科技有限公司 Human body limb similarity evaluation method of posture image
CN110782967A (en) * 2019-11-01 2020-02-11 成都乐动信息技术有限公司 Fitness action standard degree evaluation method and device
CN111476097A (en) * 2020-03-06 2020-07-31 平安科技(深圳)有限公司 Human body posture assessment method and device, computer equipment and storage medium
WO2021000708A1 (en) * 2019-07-04 2021-01-07 安徽华米信息科技有限公司 Fitness teaching method and apparatus, electronic device and storage medium

Also Published As

Publication number Publication date
CN113239849A (en) 2021-08-10

Similar Documents

Publication Publication Date Title
CN108256433B (en) Motion attitude assessment method and system
CN107784282B (en) Object attribute identification method, device and system
CN108205654B (en) Action detection method and device based on video
CN113239849B (en) Body-building action quality assessment method, body-building action quality assessment system, terminal equipment and storage medium
CN110555387B (en) Behavior identification method based on space-time volume of local joint point track in skeleton sequence
CN110688929B (en) Human skeleton joint point positioning method and device
CN113392742A (en) Abnormal action determination method and device, electronic equipment and storage medium
CN110633004B (en) Interaction method, device and system based on human body posture estimation
CN115497596B (en) Human body motion process posture correction method and system based on Internet of things
WO2017161734A1 (en) Correction of human body movements via television and motion-sensing accessory and system
CN112668359A (en) Motion recognition method, motion recognition device and electronic equipment
CN111914643A (en) Human body action recognition method based on skeleton key point detection
CN110659570A (en) Target object posture tracking method, and neural network training method and device
CN112418135A (en) Human behavior recognition method and device, computer equipment and readable storage medium
CN113392741A (en) Video clip extraction method and device, electronic equipment and storage medium
CN111898571A (en) Action recognition system and method
CN112633221A (en) Face direction detection method and related device
CN112200074A (en) Attitude comparison method and terminal
CN105844204B (en) Human behavior recognition method and device
KR20140043174A (en) Simulator for horse riding and method for simulation of horse riding
JP2016045884A (en) Pattern recognition device and pattern recognition method
Hachaj et al. Human actions recognition on multimedia hardware using angle-based and coordinate-based features and multivariate continuous hidden Markov model classifier
CN113239848B (en) Motion perception method, system, terminal equipment and storage medium
CN116343335A (en) Motion gesture correction method based on motion recognition
CN112257642B (en) Human body continuous motion similarity evaluation method and evaluation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant