CN112990011A - Body-building action recognition and evaluation method based on machine vision and deep learning - Google Patents

Body-building action recognition and evaluation method based on machine vision and deep learning

Info

Publication number
CN112990011A
CN112990011A
Authority
CN
China
Prior art keywords
action
scoring
score
standard
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110274616.8A
Other languages
Chinese (zh)
Inventor
崔嘉亮 (Cui Jialiang)
钟倩文 (Zhong Qianwen)
郑树彬 (Zheng Shubin)
彭乐乐 (Peng Lele)
文静 (Wen Jing)
林湧 (Lin Yong)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai University of Engineering Science
Original Assignee
Shanghai University of Engineering Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai University of Engineering Science filed Critical Shanghai University of Engineering Science
Priority to CN202110274616.8A priority Critical patent/CN112990011A/en
Publication of CN112990011A publication Critical patent/CN112990011A/en
Pending legal-status Critical Current


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/22Matching criteria, e.g. proximity measures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/48Matching video sequences

Abstract

The invention relates to a fitness action recognition and evaluation method based on machine vision and deep learning, comprising the following steps: acquiring a fitness action video to be evaluated and setting the action type; establishing a human skeleton model for each frame of the video; extracting the frame whose action is closest to the standard action as the image to be evaluated; establishing scoring rule standards for obtaining action scores; and calculating the distance features and angle features of the human skeleton model in the image to be evaluated, and scoring the action according to the scoring standard corresponding to the action type. Compared with the prior art, the invention adopts a bottom-up human posture recognition algorithm to recognize fitness actions and obtain the corresponding action scores, which improves the accuracy and efficiency of scoring, effectively prompts wrong actions, and improves fitness efficiency.

Description

Body-building action recognition and evaluation method based on machine vision and deep learning
Technical Field
The invention relates to the field of machine vision and sports health, in particular to a fitness action recognition and evaluation method based on machine vision and deep learning.
Background
As people pay more attention to their health, more and more of them turn to exercise to improve it. Traditional fitness training is corrected under the supervision and guidance of a coach and requires training with a professional in a specific setting.
Limited by time and economic conditions, many people choose to exercise at home. Without a coach's real-time instruction and judgment, however, home fitness often fails to achieve a good effect, and non-standard fitness actions can easily cause bodily injury.
Disclosure of Invention
The invention aims to overcome the defects of the prior art and provide a fitness action recognition and evaluation method based on machine vision and deep learning.
The purpose of the invention can be realized by the following technical scheme:
a fitness action recognition and evaluation method based on machine vision and deep learning comprises the following steps:
s1: acquiring a body-building action video to be evaluated, and setting an action type;
s2: establishing a human body skeleton model of each frame of video image;
s3: extracting a frame of video image with the action closest to the standard action as an image to be evaluated;
s4: establishing a scoring rule standard for acquiring the action score;
s5: and calculating the distance characteristic and the angle characteristic of the human skeleton model in the image to be evaluated, and scoring the action in the image to be evaluated according to the scoring standard corresponding to the action type in the scoring standard.
Preferably, the action scoring criteria are:
G_m = (a_{m,1} × W_{m,1} + a_{m,2} × W_{m,2} + ... + a_{m,n} × W_{m,n}) × 100
where m is the action type, m ∈ [1, 2, …, M], M being the total number of action types; n is the scoring-criterion index, n ∈ [1, 2, …, N], N being the number of scoring criteria corresponding to the action type; G_m is the action score of the m-th fitness action, a_{m,n} is the evaluation score of the n-th scoring criterion of the m-th fitness action, and W_{m,n} is the scoring weight of the n-th scoring criterion of the m-th fitness action.
Preferably, the evaluation score a_{m,n} of the n-th scoring criterion of the m-th fitness action is a piecewise function of the characteristic data x_{m,n} corresponding to that criterion (the formula survives only as an equation image in the original), where x_{m,n} is either a distance feature or an angle feature, four auxiliary evaluation parameters delimit the scoring intervals, S_f is the failing score, and S_p is the excellent score.
Preferably, the step S2 specifically includes:
s21: selecting a COCO human body model, and acquiring the skeletal key points of each frame of the video using the CMU human pose dataset;
s22: constructing a human skeleton map from the skeletal key points.
Preferably, the skeletal key points comprise the nose, neck, right shoulder, right elbow, right hand, left shoulder, left elbow, left hand, right hip, right knee, right foot, left hip, left knee, left foot, right eye, left eye, right ear and left ear.
Preferably, in step S22, a human skeleton map is constructed using Part Affinity Fields (PAFs): two-dimensional vectors encoding limb position and direction are computed, and the skeletal key points are then connected.
Preferably, the step S3 specifically includes: extracting video frames from the fitness action video by frame skipping, matching the skeleton feature map of the standard action against the skeleton feature maps of the extracted frames using a VGG-16 deep convolutional neural network, and extracting the frame closest to the standard action.
Preferably, the action types include: push-up, close-grip push-up, wide-grip push-up, crunch, reverse crunch with leg raise, supine alternating leg raise, plank, bent-over A-raise, bent-over W-raise, back extension, deep squat, hip bridge, and wall sit.
Preferably, the method further comprises step S6: if the action score is less than 60, prompting the location of the erroneous action.
Preferably, the specific steps of step S6 include:
s61: judging whether the action score is less than 60 points; if so, proceeding to step S62, otherwise outputting the evaluation score;
s62: obtaining the scoring criteria whose evaluation score a_{m,n} within the action score G_m is a failing score, and obtaining the error-action prompt corresponding to those criteria.
Compared with the prior art, the invention has the following advantages:
(1) the invention extracts the frame of the fitness video closest to the standard action and, by calculating the distances and joint angles between key parts of the human body and starting from the movement mechanics of each action, designs scoring standards matched to each fitness action; the method is therefore applicable to different fitness actions, has high calculation precision, high scoring reference value and wide applicability, so that fitness actions can be scored accurately and efficiently, action prompts are provided, joint damage is avoided, and fitness efficiency is improved;
(2) the method constructs the human skeleton map using PAFs (Part Affinity Fields) and uses a VGG-16 deep convolutional neural network to obtain the frame closest to the standard action, which effectively avoids failing to build a skeleton map when the human body is not recognized, reliably extracts the video frame used for scoring, and improves the scoring accuracy and efficiency;
(3) the method can eliminate background factors and extracts the skeleton image features of video frames through the convolutional network, reducing image-matching inaccuracy and improving matching precision.
Drawings
FIG. 1 is a flow chart of the fitness action recognition and evaluation method;
FIG. 2 is a two-dimensional human skeleton;
FIG. 3 is the VGG-16 network used by the key-frame extraction module.
Detailed Description
The invention is described in detail below with reference to the figures and specific embodiments. Note that the following description of the embodiments is merely an illustrative example; the present invention is not limited to the following embodiments or to the applications and uses described.
Example 1
A fitness action recognition and evaluation method based on machine vision and deep learning is disclosed, as shown in FIG. 1, and comprises the following steps:
s1: and acquiring a body-building action video to be evaluated, and setting an action type.
In this embodiment, the user may select to start the computer camera to shoot the exercise movement, or select to upload the exercise video corresponding to the movement, and the computer writes the exercise video into the corresponding file. The video format adopts MP4 format, an MJPEG encoder is adopted, the video frame rate is 30 frames, and after computer cutting, the size of the video picture is as follows: 640*480.
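As a rough illustration of this acquisition step, the stated capture settings (640×480 picture, 30 frames per second, MJPEG encoder) could be reproduced with OpenCV as sketched below; the device index, output path and function name are illustrative, and whether MJPEG can be written into an MP4 container depends on the local OpenCV/FFmpeg build.

```python
import cv2

def record_workout_video(out_path="workout.mp4", seconds=30):
    """Capture a fitness clip with the settings described in this embodiment."""
    cap = cv2.VideoCapture(0)                       # default computer camera
    cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)          # 640x480 picture size
    cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)
    fourcc = cv2.VideoWriter_fourcc(*"MJPG")        # MJPEG encoder
    writer = cv2.VideoWriter(out_path, fourcc, 30.0, (640, 480))  # 30 fps
    for _ in range(30 * seconds):
        ok, frame = cap.read()
        if not ok:
            break
        writer.write(cv2.resize(frame, (640, 480)))  # enforce the stated size
    cap.release()
    writer.release()
```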
S2: and establishing a human skeleton model of each frame of video image.
Step S2 specifically includes:
s21: selecting a COCO human body model, and acquiring skeleton key points of each frame of video image by adopting a CMU human body posture data set, wherein the skeleton key points comprise a nose, a neck, a right shoulder, a right elbow, a right hand, a left shoulder, a left elbow, a left hand, a right crotch, a right knee, a right foot, a left crotch, a left knee, a left foot, a right eye, a left eye, a right ear and a left ear.
S22: and constructing a human skeleton map according to the skeleton key points.
In this embodiment, in step S22, a human skeleton map is constructed using Part Affinity Fields (PAFs): two-dimensional vectors encoding limb position and direction are computed, and the skeletal key points are then connected.
Taking a single arm as an example, x_{j1,k} and x_{j2,k} are the two key points of limb c of the k-th person (elbow j1 and hand j2). Whether a pixel p lies on this limb is judged according to the following formulas. The part affinity field at p is

L_{c,k}(p) = v, if p lies on limb c of person k; 0 otherwise,

where v is the unit vector along the limb,

v = (x_{j2,k} − x_{j1,k}) / ‖x_{j2,k} − x_{j1,k}‖_2,

and p is judged to lie on the limb when

0 ≤ v · (p − x_{j1,k}) ≤ l_{c,k} and |v⊥ · (p − x_{j1,k})| ≤ σ_l.

Here the limb width σ_l is a distance in pixels, the limb length is

l_{c,k} = ‖x_{j2,k} − x_{j1,k}‖_2,

and v⊥ is the unit vector perpendicular to the limb direction, so that v⊥ · (p − x_{j1,k}) measures how far p lies from the limb axis and v · (p − x_{j1,k}) measures how far p lies along it; a point p is kept only when these projections are within the limb width and limb length. The field is formed from j1 to j2, and vectors formed near these two points by other, unconnected key points take the value 0, which effectively avoids wrong connections when key points of unconnected limbs come too close during exercise.
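A minimal NumPy sketch of this on-limb test, assuming 2-D pixel coordinates as arrays and treating the limb width σ_l as a given input (the function name is illustrative):

```python
import numpy as np

def point_on_limb(p, x_j1, x_j2, sigma_l):
    """True if pixel p lies inside the rectangle spanned by limb (j1, j2)."""
    limb = x_j2 - x_j1
    l_ck = np.linalg.norm(limb)              # limb length l_{c,k}
    v = limb / l_ck                          # unit vector along the limb
    v_perp = np.array([-v[1], v[0]])         # unit vector perpendicular to the limb
    along = np.dot(v, p - x_j1)              # projection along the limb direction
    across = abs(np.dot(v_perp, p - x_j1))   # distance from the limb axis
    return 0.0 <= along <= l_ck and across <= sigma_l
```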
S3: extracting a frame of video image with the action closest to the standard action as an image to be evaluated;
the step S3 specifically includes: the VGG-16 deep convolution neural network shown in figure 3 is adopted to match video images, frame skipping is carried out to extract video frames of the body-building action video, the VGG-16 deep convolution neural network is utilized to match the bone feature map of the standard action with the bone feature map in the extracted video frames, and one frame closest to the standard action in the video frames is extracted.
Further, in step S3 of this embodiment, human skeleton recognition is performed on each extracted frame, and the recognized skeleton is placed on a blank background to avoid environmental interference.
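A hedged sketch of this key-frame selection follows: every skipped frame's skeleton map is embedded with VGG-16 and compared against the standard action's skeleton map. The torchvision backbone and the cosine-similarity criterion are assumptions for illustration; the patent states only that a VGG-16 deep convolutional neural network performs the matching.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained VGG-16 convolutional features (assumed backbone; torchvision >= 0.13).
vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
prep = T.Compose([T.ToPILImage(), T.Resize((224, 224)), T.ToTensor()])

def embed(skeleton_img):
    """Flatten the VGG-16 feature map of a skeleton image (H x W x 3 ndarray)."""
    with torch.no_grad():
        return vgg(prep(skeleton_img).unsqueeze(0)).flatten()

def closest_frame(standard_img, frames, skip=5):
    """Return the frame whose skeleton map best matches the standard action."""
    ref = embed(standard_img)
    best, best_sim = None, -1.0
    for i in range(0, len(frames), skip):            # frame skipping, as in S3
        sim = torch.cosine_similarity(ref, embed(frames[i]), dim=0).item()
        if sim > best_sim:
            best, best_sim = frames[i], sim
    return best
```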
S4: and establishing a scoring rule standard for acquiring the action score.
In step S4, a scoring criterion is established for each action, and each action corresponds to a plurality of scores.
The action scoring standard is as follows:
G_m = (a_{m,1} × W_{m,1} + a_{m,2} × W_{m,2} + ... + a_{m,n} × W_{m,n}) × 100
where m is the action type, m ∈ [1, 2, …, M], M being the total number of action types; n is the scoring-criterion index, n ∈ [1, 2, …, N], N being the number of scoring criteria corresponding to the action type; G_m is the action score of the m-th fitness action, a_{m,n} is the evaluation score of the n-th scoring criterion of the m-th fitness action, and W_{m,n} is the scoring weight of the n-th scoring criterion of the m-th fitness action.
Further, the evaluation score a_{m,n} of the n-th scoring criterion of the m-th fitness action is a piecewise function of the characteristic data x_{m,n} corresponding to that criterion (the formula survives only as an equation image in the original), where x_{m,n} is either a distance feature or an angle feature, four auxiliary evaluation parameters delimit the scoring intervals, S_f is the failing score, and S_p is the excellent score.
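Because the per-criterion formula survives here only as an equation image, the sketch below is an assumed reconstruction: a piecewise ramp over the four auxiliary parameters that reproduces some, though not all, of the worked numbers later in this embodiment (e.g. x = 172° with parameters 165/175/185/195 gives 0.7). The weighted sum G_m, by contrast, is exactly the formula stated above.

```python
def criterion_score(x, t1, t2, t3, t4, s_f=0.45, s_p=1.0):
    """Assumed piecewise evaluation score a_{m,n} for feature value x."""
    if x < t1 or x > t4:
        return s_f                          # failing score S_f outside [t1, t4]
    if t2 <= x <= t3:
        return s_p                          # excellent score S_p in the middle band
    if x < t2:
        return s_p * (x - t1) / (t2 - t1)   # rising ramp, e.g. (172-165)/10 = 0.7
    return s_p * (t4 - x) / (t4 - t3)       # falling ramp

def action_score(a_scores, weights):
    """G_m = (a_{m,1} W_{m,1} + ... + a_{m,n} W_{m,n}) * 100, as stated above."""
    return 100 * sum(a * w for a, w in zip(a_scores, weights))
```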
S5: and calculating the distance characteristic and the angle characteristic of the human skeleton model in the image to be evaluated, and scoring the action in the image to be evaluated according to the scoring standard corresponding to the action type in the scoring standard.
In the present invention, step S5 is performed on the right side of the exercising person. The distance features and angle features of the human skeleton model in the image to be evaluated are calculated according to FIG. 2 and Table 1. As shown in FIG. 2, the body parts represented by the numbers 0-17 are: 0 nose, 1 neck, 2 right shoulder, 3 right elbow, 4 right hand, 5 left shoulder, 6 left elbow, 7 left hand, 8 right hip, 9 right knee, 10 right foot, 11 left hip, 12 left knee, 13 left foot, 14 right eye, 15 left eye, 16 right ear and 17 left ear. In Table 1, a distance feature denotes the distance between the body parts represented by two numbers (e.g., 2-5 is the distance between the right and left shoulders), and an angle feature denotes the included angle formed by connecting the body parts represented by three numbers in sequence (e.g., 2-3-4 is the angle formed by the right shoulder, right elbow and right hand, i.e., the right elbow bending angle).
TABLE 1. Meanings of the distance and angle features (the table itself is given as an image in the original).
In implementation, the distance features and angle features are input into the corresponding scoring-standard calculation formulas to obtain the corresponding action scores.
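A small sketch of this feature computation, assuming `kp` maps the FIG. 2 indices to 2-D NumPy coordinates (the helper names are illustrative); for example, angle_feature(kp, 2, 3, 4) yields the right elbow bending angle and distance_feature(kp, 2, 5) the shoulder distance:

```python
import numpy as np

def distance_feature(kp, i, j):
    """Distance between keypoints i and j, e.g. (2, 5) = right-to-left shoulder."""
    return np.linalg.norm(kp[i] - kp[j])

def angle_feature(kp, i, j, k):
    """Angle in degrees at keypoint j formed by i-j-k, e.g. (2, 3, 4) = right elbow."""
    u, v = kp[i] - kp[j], kp[k] - kp[j]
    cos = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))
```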
In this embodiment, the action types include: push-up, close-grip push-up, wide-grip push-up, crunch, reverse crunch with leg raise, supine alternating leg raise, plank, bent-over A-raise, bent-over W-raise, back extension, deep squat, hip bridge, and wall sit.
Table 2 lists the identification numbers, scoring standards and matching weights corresponding to the action types, where the identification numbers 1-13 represent the push-up, close-grip push-up, wide-grip push-up, crunch, reverse crunch with leg raise, supine alternating leg raise, plank, bent-over A-raise, bent-over W-raise, back extension, deep squat, hip bridge, and wall sit, respectively; each action corresponds to its own scoring standards and weights.
TABLE 2. Action types and their corresponding identifiers, scoring criteria and weights (the table itself is given as images in the original).
Specifically, taking the push-up as an example, its action type m is 1 and it corresponds to three scoring criteria. The characteristic data x_{1,1} of a_{1,1} is the angle feature 9-8-2, the right leg-back-shoulder angle, used to assess whether the torso is straight; the characteristic data x_{1,2} of a_{1,2} is the absolute value of the difference between distance feature 2-5 and distance feature 4-7, the hand-to-shoulder distance, used to assess whether the hands are placed at the chest position; the characteristic data x_{1,3} of a_{1,3} is the angle feature 9-8-2, the right leg-back-shoulder angle, used to assess whether the torso is straight. The corresponding evaluation weights of the three criteria are W_{1,1} = 0.5, W_{1,2} = 0.2 and W_{1,3} = 0.3, respectively.
For criterion 1, the evaluation score a_{1,1} is computed by its piecewise scoring formula (given as an equation image in the original), whose four auxiliary evaluation parameters are 165, 175, 185 and 195, in °.
For criterion 2, the evaluation score a_{1,2} is computed by the corresponding piecewise formula, whose auxiliary parameters are −12, 0, 12 and 24, in cm.
For criterion 3, the evaluation score a_{1,3} is computed by the corresponding piecewise formula, whose auxiliary parameters are 10, 25, 40 and 55, in °.
In particular, in this embodiment, the identified angle feature 12-11-5 is 172°, i.e., the left leg-back-shoulder angle is 172°, so x_{1,1} = 172°, which gives a_{1,1} = 0.7. The identified distance feature 2-5 is 42 and distance feature 4-7 is 55, so x_{1,2} = |55 − 42| = 13, which gives a_{1,2} = 0.83. The identified angle feature 9-8-2 is 62°, so x_{1,3} = 62° and a_{1,3} = S_f, where in this embodiment S_f = 0.45.
The resulting action score is:
G_1 = (a_{1,1} × W_{1,1} + a_{1,2} × W_{1,2} + a_{1,3} × W_{1,3}) × 100 = (0.7 × 0.5 + 0.83 × 0.2 + 0.45 × 0.3) × 100 = 65.1
example 2
In this embodiment, the method further includes step S6: if the action score is less than 60, the location of the erroneous action is prompted.
The specific steps of step S6 include:
s61: judging whether the action score is less than 60 points; if so, proceeding to step S62, otherwise outputting the evaluation score.
s62: obtaining the scoring criteria whose evaluation score a_{m,n} within the action score G_m is a failing score, and obtaining the error-action prompt corresponding to those criteria.
Taking the push-up as an example: the action score obtained in Example 1 is 65.1, which is not less than 60, so the score is output directly.
Take the plank (action type m = 7) as an example. Its three criteria use the piecewise scoring formulas with auxiliary evaluation parameters of 70, 80, 90 and 100 for the first criterion, 150, 165, 180 and 195 for the second, and 165, 175, 185 and 195 for the third (the parameter symbols are given as equation images in the original). The identified angle feature 2-3-4 is 67°, so x_{7,1} = 67°, which gives a_{7,1} = 0.72; the identified angle feature 16-2-8 is 152°, so x_{7,2} = 152°, which gives a_{7,2} = 0.51; the identified angle feature 9-8-2 is 161°, so x_{7,3} = 161° and a_{7,3} = S_f, where in this embodiment S_f = 0.45. The weights W_{7,1}, W_{7,2} and W_{7,3} are 0.2, 0.4 and 0.4, respectively, so the action score of the plank in this embodiment is:
G_7 = (a_{7,1} × W_{7,1} + a_{7,2} × W_{7,2} + a_{7,3} × W_{7,3}) × 100 = (0.72 × 0.2 + 0.51 × 0.4 + 0.45 × 0.4) × 100 = 52.8
the acquired action score is less than 60 points, and the process proceeds to step S62;
In step S62, the criterion whose score a_{7,3} within the evaluation score G_7 is a failing score is obtained, and the corresponding error-action prompt is: the angle feature 9-8-2, the right leg-back-shoulder angle, is not standard.
The error-action prompts for the evaluation criteria of the other action types likewise indicate which action feature is not standard.
The above embodiments are merely examples and do not limit the scope of the present invention. These embodiments may be implemented in other various manners, and various omissions, substitutions, and changes may be made without departing from the technical spirit of the present invention.

Claims (10)

1. A body-building action recognition and evaluation method based on machine vision and deep learning is characterized by comprising the following steps:
s1: acquiring a body-building action video to be evaluated, and setting an action type;
s2: establishing a human body skeleton model of each frame of video image;
s3: extracting a frame of video image with the action closest to the standard action as an image to be evaluated;
s4: establishing a scoring rule standard for acquiring the action score;
s5: and calculating the distance characteristic and the angle characteristic of the human skeleton model in the image to be evaluated, and scoring the action in the image to be evaluated according to the scoring standard corresponding to the action type in the scoring standard.
2. A fitness motion recognition and assessment method based on machine vision and deep learning according to claim 1, wherein the motion scoring criteria are:
G_m = (a_{m,1} × W_{m,1} + a_{m,2} × W_{m,2} + ... + a_{m,n} × W_{m,n}) × 100
where m is the action type, m ∈ [1, 2, …, M], M being the total number of action types; n is the scoring-criterion index, n ∈ [1, 2, …, N], N being the number of scoring criteria corresponding to the action type; G_m is the action score of the m-th fitness action, a_{m,n} is the evaluation score of the n-th scoring criterion of the m-th fitness action, and W_{m,n} is the scoring weight of the n-th scoring criterion of the m-th fitness action.
3. A method as claimed in claim 2, wherein the evaluation score a_{m,n} of the n-th scoring criterion of the m-th fitness action is a piecewise function of the characteristic data x_{m,n} corresponding to that criterion (the formula is given only as an equation image in the original), where x_{m,n} is either a distance feature or an angle feature, the auxiliary evaluation parameters delimit the scoring intervals, S_f is the failing score, and S_p is the excellent score.
4. A method for identifying and evaluating exercise motions based on machine vision and deep learning as claimed in claim 1, wherein the step S2 specifically comprises:
s21: selecting a COCO human body model, and acquiring bone key points of each frame of video image by adopting a CMU human body posture data set;
s22: and constructing a human skeleton map according to the skeleton key points.
5. A method as claimed in claim 4, wherein the skeletal key points comprise the nose, neck, right shoulder, right elbow, right hand, left shoulder, left elbow, left hand, right hip, right knee, right foot, left hip, left knee, left foot, right eye, left eye, right ear and left ear.
6. A fitness action recognition and evaluation method based on machine vision and deep learning according to claim 4, wherein in step S22 a human skeleton map is constructed using Part Affinity Fields (PAFs): two-dimensional vectors encoding limb position and direction are computed, and the skeletal key points are then connected.
7. A method for identifying and evaluating fitness actions based on machine vision and deep learning as claimed in claim 1, wherein the step S3 specifically comprises: extracting video frames from the fitness action video by frame skipping, matching the skeleton feature map of the standard action against the skeleton feature maps of the extracted frames using a VGG-16 deep convolutional neural network, and extracting the frame closest to the standard action.
8. A method according to claim 1, wherein the action types include: push-up, close-grip push-up, wide-grip push-up, crunch, reverse crunch with leg raise, supine alternating leg raise, plank, bent-over A-raise, bent-over W-raise, back extension, deep squat, hip bridge, and wall sit.
9. A method for identifying and evaluating fitness actions based on machine vision and deep learning as claimed in claim 2, further comprising step S6: if the action score is less than 60, prompting the location of the erroneous action.
10. A method for identifying and evaluating fitness actions based on machine vision and deep learning as claimed in claim 9, wherein step S6 specifically comprises:
s61: judging whether the action score is less than 60 points; if so, proceeding to step S62, otherwise outputting the evaluation score;
s62: obtaining the scoring criteria whose evaluation score a_{m,n} within the action score G_m is a failing score, and obtaining the error-action prompt corresponding to those criteria.
CN202110274616.8A 2021-03-15 2021-03-15 Body-building action recognition and evaluation method based on machine vision and deep learning Pending CN112990011A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110274616.8A CN112990011A (en) 2021-03-15 2021-03-15 Body-building action recognition and evaluation method based on machine vision and deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110274616.8A CN112990011A (en) 2021-03-15 2021-03-15 Body-building action recognition and evaluation method based on machine vision and deep learning

Publications (1)

Publication Number Publication Date
CN112990011A true CN112990011A (en) 2021-06-18

Family

ID=76335468

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110274616.8A Pending CN112990011A (en) 2021-03-15 2021-03-15 Body-building action recognition and evaluation method based on machine vision and deep learning

Country Status (1)

Country Link
CN (1) CN112990011A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505662A * 2021-06-23 2021-10-15 Guangzhou University Fitness guidance method, device and storage medium
CN113892928A * 2021-10-14 2022-01-07 Capital University of Physical Education and Sports Body-building behavior monitoring system based on Beidou positioning and narrowband Internet of things
CN115880774A * 2022-12-01 2023-03-31 Hunan University of Technology and Business Body-building action recognition method and device based on human body posture estimation and related equipment

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108734104A (en) * 2018-04-20 2018-11-02 杭州易舞科技有限公司 Body-building action error correction method based on deep learning image recognition and system
CN109558824A (en) * 2018-11-23 2019-04-02 卢伟涛 A kind of body-building movement monitoring and analysis system based on personnel's image recognition
CN109829442A (en) * 2019-02-22 2019-05-31 焦点科技股份有限公司 A kind of method and system of the human action scoring based on camera
CN110321754A (en) * 2018-03-28 2019-10-11 西安铭宇信息科技有限公司 A kind of human motion posture correcting method based on computer vision and system
CN110796077A (en) * 2019-10-29 2020-02-14 湖北民族大学 Attitude motion real-time detection and correction method
CN111199558A (en) * 2019-12-25 2020-05-26 北京自行者科技有限公司 Image matching method based on deep learning
CN111652078A (en) * 2020-05-11 2020-09-11 浙江大学 Yoga action guidance system and method based on computer vision
CN111931804A (en) * 2020-06-18 2020-11-13 南京信息工程大学 RGBD camera-based automatic human body motion scoring method

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110321754A (en) * 2018-03-28 2019-10-11 西安铭宇信息科技有限公司 A kind of human motion posture correcting method based on computer vision and system
CN108734104A (en) * 2018-04-20 2018-11-02 杭州易舞科技有限公司 Body-building action error correction method based on deep learning image recognition and system
CN109558824A (en) * 2018-11-23 2019-04-02 卢伟涛 A kind of body-building movement monitoring and analysis system based on personnel's image recognition
CN109829442A (en) * 2019-02-22 2019-05-31 焦点科技股份有限公司 A kind of method and system of the human action scoring based on camera
CN110796077A (en) * 2019-10-29 2020-02-14 湖北民族大学 Attitude motion real-time detection and correction method
CN111199558A (en) * 2019-12-25 2020-05-26 北京自行者科技有限公司 Image matching method based on deep learning
CN111652078A (en) * 2020-05-11 2020-09-11 浙江大学 Yoga action guidance system and method based on computer vision
CN111931804A (en) * 2020-06-18 2020-11-13 南京信息工程大学 RGBD camera-based automatic human body motion scoring method

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113505662A * 2021-06-23 2021-10-15 Guangzhou University Fitness guidance method, device and storage medium
CN113505662B (en) * 2021-06-23 2024-03-01 Guangzhou University Body-building guiding method, device and storage medium
CN113892928A * 2021-10-14 2022-01-07 Capital University of Physical Education and Sports Body-building behavior monitoring system based on Beidou positioning and narrowband Internet of things
CN115880774A * 2022-12-01 2023-03-31 Hunan University of Technology and Business Body-building action recognition method and device based on human body posture estimation and related equipment

Similar Documents

Publication Publication Date Title
CN112990011A (en) Body-building action recognition and evaluation method based on machine vision and deep learning
CN108734104B (en) Body-building action error correction method and system based on deep learning image recognition
CN111144217B (en) Motion evaluation method based on human body three-dimensional joint point detection
CN105512621B (en) A kind of shuttlecock action director's system based on Kinect
CN109919034A (en) A kind of identification of limb action with correct auxiliary training system and method
CN104573665B (en) A kind of continuous action recognition methods based on improvement viterbi algorithm
CN103678859B (en) Motion comparison method and motion comparison system
WO2014042121A1 (en) Movement evaluation device and program therefor
CN110448870B (en) Human body posture training method
CN108721870B (en) Exercise training evaluation method based on virtual environment
CN111652078A (en) Yoga action guidance system and method based on computer vision
CN110210284A (en) A kind of human body attitude behavior intelligent Evaluation method
WO2017161734A1 (en) Correction of human body movements via television and motion-sensing accessory and system
CN114099234B (en) Intelligent rehabilitation robot data processing method and system for assisting rehabilitation training
CN112749684A (en) Cardiopulmonary resuscitation training and evaluating method, device, equipment and storage medium
CN112364694A (en) Human body sitting posture identification method based on key point detection
CN115482580A (en) Multi-person evaluation system based on machine vision skeletal tracking technology
Wang et al. Motion analysis of deadlift for trainers with different levels based on body sensor network
CN115661930A (en) Action scoring method and device, action scoring equipment and storage medium
KR102013705B1 (en) Apparatus and method for recognizing user's posture in horse-riding simulator
CN111091889A (en) Human body form detection method based on mirror surface display, storage medium and device
CN113663312B (en) Micro-inertia-based non-apparatus body-building action quality evaluation method
CN112818800A (en) Physical exercise evaluation method and system based on human skeleton point depth image
CN114360052A (en) Intelligent somatosensory coach system based on AlphaPose and joint point angle matching algorithm
CN115006822A (en) Intelligent fitness mirror control system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20210618
