CN111639605A - Human body action scoring method based on machine vision - Google Patents

Human body action scoring method based on machine vision

Info

Publication number
CN111639605A
CN111639605A (application CN202010485658.1A)
Authority
CN
China
Prior art keywords
action
score
feature
actual image
image
Prior art date
Legal status
Granted
Application number
CN202010485658.1A
Other languages
Chinese (zh)
Other versions
CN111639605B (en)
Inventor
石晓冬
张红旗
杨德战
Current Assignee
Shadow Lake Culture Beijing Co ltd
Original Assignee
Shadow Lake Culture Beijing Co ltd
Priority date
Filing date
Publication date
Application filed by Shadow Lake Culture Beijing Co ltd filed Critical Shadow Lake Culture Beijing Co ltd
Priority to CN202010485658.1A priority Critical patent/CN111639605B/en
Publication of CN111639605A publication Critical patent/CN111639605A/en
Application granted granted Critical
Publication of CN111639605B publication Critical patent/CN111639605B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/23Recognition of whole body movements, e.g. for sport training
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/751Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching


Abstract

The invention discloses a human body action scoring method based on machine vision, which comprises the following steps: obtaining an action feature scheme according to the action category selected by the user, and initially correcting the image acquisition angle and position; acquiring an actual image of the user's action, and extracting the corresponding key feature points in the actual image according to the action feature scheme, wherein the key feature points comprise feature points of the head, neck, shoulder, chest, abdomen, elbow, wrist, hand, hip, knee, ankle, foot, or prop; connecting corresponding key feature points according to the action feature scheme to generate feature connecting lines, wherein the feature connecting lines comprise the feature connecting lines of the head and neck, the neck and shoulder, the chest and abdomen, the upper arm, the lower arm, the thigh, the calf, the foot, or the hand and prop; and determining the human body action score according to the matching degree of the corresponding feature connecting lines in the actual image and a standard image. The invention has high reliability and usability, and improves the user experience.

Description

Human body action scoring method based on machine vision
Technical Field
The invention relates to the field of automatic detection devices, in particular to a human body action scoring method based on machine vision.
Background
With the improvement of people's living standard, sports activities such as dance, martial arts, and fitness training have gradually become popular ways to keep fit and enrich leisure time. Because the attractiveness, efficiency, and safety of a movement all depend on proper form, standardization of human body actions is particularly important. However, current human body action evaluation methods are narrow in application and cannot meet the guidance needs of different groups of people for different sports activities.
Meanwhile, because existing human body action evaluation methods are usually based on two-dimensional pictures, they cannot objectively and accurately reflect how standard a human movement is in three-dimensional space. Taking the barbell bench press as an example, the horizontal positions of the barbell at the highest and lowest points, the grip width of both hands, the posture of the waist and chest, and the distance between the elbows and the torso all affect the efficiency and safety of the movement; action guidance based only on a two-dimensional plane not only reduces reliability but also easily introduces safety hazards.
Disclosure of Invention
In order to overcome the problems of narrow application range, low reliability and potential safety hazard of the existing human motion evaluation method, the embodiment of the invention provides a human motion scoring method based on machine vision, which comprises the following steps:
acquiring an action characteristic scheme according to the action category selected by the user, and initially correcting the image acquisition angle and position;
acquiring an actual image of the user's action, and extracting the corresponding key feature points in the actual image according to the action feature scheme, wherein the key feature points comprise feature points of the head, neck, shoulder, chest, abdomen, elbow, wrist, hand, hip, knee, ankle, foot, or prop;
connecting corresponding key feature points according to the action feature scheme to generate feature connecting lines, wherein the feature connecting lines comprise the feature connecting lines of the head and neck, the neck and shoulder, the chest and abdomen, the upper arm, the lower arm, the thigh, the calf, the foot, or the hand and prop;
and determining the human body action score according to the matching degree of the corresponding characteristic connecting lines in the actual image and the standard image.
Further, the step of determining the human body action score according to the matching degree of the corresponding feature connecting line in the actual image and the standard image comprises:
generating a first data set according to the included angle value of the corresponding characteristic connecting line in the actual image;
generating a second data set according to the included angle value of the corresponding characteristic connecting line in the standard image;
and determining the human body action score according to the difference value of the corresponding included angle values in the first data set and the second data set.
Further, the actual image is a continuous-time actual image, and the step of determining the human body action score according to the difference of the corresponding included angle values in the first data set and the second data set comprises the following steps:
determining a key frame and a transition frame in an actual image;
generating a first score S1 according to the included angle value difference of the corresponding key frames in the first data set and the second data set;
generating a second score S2 according to the included angle value difference of the corresponding transition frames in the first data set and the second data set;
generating a third score S3 according to the number of corresponding key frames and transition frames in the actual image and the standard image;
according to the formula
S = D1 × S1 + D2 × S2 + D3 × S3
determining the human body action score S, wherein D1 is the weight of the first score S1, D2 is the weight of the second score S2, and D3 is the weight of the third score S3.
Further, the step of obtaining the motion characteristic scheme according to the motion category selected by the user and initially correcting the image acquisition angle and position includes:
generating an action characteristic scheme corresponding to the action category according to the action category selected by the user;
acquiring a shooting specification and an action specification corresponding to the action characteristic scheme;
when the angle and the position of the camera meet the shooting specification, sending out first prompt information;
when the initial action of the user accords with the action specification, sending out second prompt information; the first prompt message and the second prompt message comprise voice messages, image messages or light messages.
Further, after the step of determining the human body action score according to the matching degree of the corresponding feature connecting line in the actual image and the standard image, the method further comprises the following steps:
displaying the actual image with the lowest score and the standard image of the corresponding frame;
and highlighting the feature connecting line with the maximum included angle difference in the actual image.
Further, the above-mentioned motion categories include martial arts motions, dance motions, sporting events, and fitness motions.
Further, the above-mentioned props include guns, swords, knives, sticks, fans, dumbbells, and barbells, and the feature connecting line of a prop is the central axis of the prop.
according to the embodiment of the invention, various action categories are preset, each action category corresponds to a unique action characteristic scheme, and each action characteristic scheme corresponds to a unique image acquisition angle and position, key characteristic points and characteristic connecting lines, so that the embodiment of the invention can set customized, efficient and reliable guidance schemes for different physical activities, and the application range of the human action evaluation method is greatly widened; meanwhile, the embodiment of the invention can generate action characteristic schemes of a plurality of visual angles (visual angles in the horizontal x direction, the horizontal y direction and the vertical z direction) for the same action type, and each action characteristic scheme is independent but all action characteristic schemes can evaluate the normative of the same sports action from a three-dimensional visual angle, so that a user can obtain normative guidance to the maximum extent. In addition, different from the traditional judgment mode that adopts key feature points, this embodiment scores through the matching degree that actual image and standard image correspond the characteristic line, has not only reduced the data operand, also can make the score more visualization, is convenient for provide the later stage action correction scheme for the user, has promoted ease of use and user's use experience.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a flowchart of a human body motion scoring method according to a first embodiment of the present invention;
fig. 2 is a flowchart of a human body motion scoring method according to a second embodiment of the present invention.
Detailed Description
In order to make the technical problems, technical solutions and advantageous effects solved by the present invention more clearly apparent, the present invention is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
When embodiments of the present invention use ordinal numbers such as "first" and "second", it should be understood that these words are used merely for distinction, unless they literally indicate an order.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be construed broadly, e.g., as a fixed connection, a removable connection, or an integral connection; as a mechanical or electrical connection; as a direct connection, an indirect connection through an intervening medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific situation.
The first embodiment:
referring to fig. 1, an embodiment of the invention discloses a human body action scoring method based on machine vision, which comprises S101-S104, wherein,
S101, acquiring an action feature scheme according to the action category selected by the user, and initially correcting the image acquisition angle and position.
In this embodiment, the user first selects an action category, including but not limited to martial arts actions, dance actions, sporting event actions, and fitness actions. Sporting event actions may include running, long jump, high jump, and the like. When an action category is selected, the action feature scheme corresponding to that category is generated. In the embodiment of the invention, the same action may have action feature schemes for multiple viewing angles; taking the barbell bench press as an example, there are action feature schemes for three viewing angles in total, namely two mutually perpendicular horizontal viewing angles and one vertical viewing angle. The user can select each action feature scheme in turn by changing the acquisition angle and position of the camera, or select all the schemes simultaneously by using three cameras at different positions to shorten the scoring time. In this step, the action feature scheme can be used to correct the angle, position, and the like of the camera, thereby improving the accuracy of the human action scoring.
S102, acquiring an actual image of the user's action, and extracting the corresponding key feature points in the actual image according to the action feature scheme, wherein the key feature points comprise feature points of the head, neck, shoulder, chest, abdomen, elbow, wrist, hand, hip, knee, ankle, foot, or prop.
In this embodiment, the prop includes a gun, a sword, a knife, a stick, a fan, a dumbbell, and a barbell, and a feature connecting line of the prop is a central axis of the prop.
In this step, the action feature scheme is used to extract the corresponding key feature points, which serve as the basis for judging whether the corresponding action category is performed to standard. Different action feature schemes correspond to different key feature points; illustratively, in the barbell bench press, key feature points such as the knee and foot do not need to be collected, and the prop is determined to be the barbell.
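To make the scheme-driven extraction of S102 concrete, here is a minimal Python sketch of how an action feature scheme might select which key feature points to keep; the scheme names, keypoint subsets, and data shapes are illustrative assumptions, not definitions from the invention:

```python
# Hypothetical sketch: an "action feature scheme" modeled as the subset of key
# feature points an action category needs. Scheme names and subsets below are
# assumptions for illustration only.
FEATURE_SCHEMES = {
    # Bench press, horizontal view: leg points are not scored (cf. the example
    # above), and the barbell prop is required.
    "bench_press_horizontal": {
        "keypoints": ["neck", "shoulder", "chest", "abdomen",
                      "elbow", "wrist", "hand"],
        "prop": "barbell",
    },
    # A whole-body dance scheme with no prop.
    "dance_full_body": {
        "keypoints": ["head", "neck", "shoulder", "chest", "abdomen",
                      "elbow", "wrist", "hand", "hip", "knee",
                      "ankle", "foot"],
        "prop": None,
    },
}

def extract_key_feature_points(detected_points, scheme_name):
    """Keep only the detected points that the selected scheme asks for."""
    wanted = FEATURE_SCHEMES[scheme_name]["keypoints"]
    return {name: xy for name, xy in detected_points.items() if name in wanted}
```

A real implementation would obtain `detected_points` from a pose estimator; here it is simply a dict mapping joint names to 2-D coordinates.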
And S103, connecting corresponding key feature points according to the motion feature scheme to generate feature connecting lines, wherein the feature connecting lines comprise feature connecting lines of the head and neck, the neck and shoulder, the chest and abdomen, the upper arm, the lower arm, the thigh, the calf, the feet or the hands and the props.
A feature connecting line is a line connecting key feature points associated with the action feature scheme. Similarly, different action feature schemes correspond to different feature connecting lines. Taking the barbell bench press as an example, in the action feature scheme for the horizontal viewing angle, the feature connecting lines of the shoulder and neck, the chest and abdomen, the upper arm, the lower arm, and the prop are the main acquisition objects, while in the action feature scheme for the vertical viewing angle, the feature connecting lines of the neck and shoulder, the upper arm, the lower arm, the hand, and the prop are the main acquisition objects.
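The line-building of S103 can be sketched as follows; the specific point pairings in `LINE_DEFS` are assumptions, and the angle helper anticipates the angle comparison used in the later scoring steps:

```python
import math

# Sketch: a feature connecting line is an ordered pair of key feature points;
# its orientation is what the later scoring steps compare. The pairings are
# illustrative assumptions.
LINE_DEFS = {
    "upper_arm": ("shoulder", "elbow"),
    "lower_arm": ("elbow", "wrist"),
    "chest_abdomen": ("chest", "abdomen"),
}

def build_feature_lines(points):
    """Connect corresponding key feature points into named feature lines."""
    lines = {}
    for name, (a, b) in LINE_DEFS.items():
        if a in points and b in points:  # skip lines whose endpoints were not collected
            lines[name] = (points[a], points[b])
    return lines

def line_angle_deg(line):
    """Orientation of a feature line in degrees, measured from the x axis."""
    (x1, y1), (x2, y2) = line
    return math.degrees(math.atan2(y2 - y1, x2 - x1))
```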
And S104, determining the score of the human body action according to the matching degree of the corresponding characteristic connecting lines in the actual image and the standard image.
The standard degree of the user's action can be determined and scored by comparing the matching degree of the corresponding feature connecting lines in the actual image and the standard image. As a preferred scheme, the matching degree comprises the inclination angle of individual important feature connecting lines (such as the chest-abdomen line), the positional relationship of associated feature connecting lines, and the included angle values of corresponding feature connecting lines. By way of example and not limitation, in this embodiment, in order to improve recognition efficiency, the included angle value of the feature connecting lines is taken as the main criterion of the matching degree, and S104 further comprises:
and S1041, generating a first data set according to the included angle value of the corresponding characteristic connecting line in the actual image.
And S1042, generating a second data set according to the included angle value of the corresponding characteristic connecting line in the standard image.
And S1043, determining a human body action score according to the difference value of the corresponding included angle values in the first data set and the second data set.
This step determines the action score by comparing the differences between the corresponding included angle values in the first data set and the second data set, which is fast, simple, intuitive, and easy to quantify.
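The three sub-steps S1041-S1043 can be sketched like this; the 0-100 scale and the linear one-point-per-degree penalty are assumed scoring rules, since the text does not specify how angle differences map to a score:

```python
# Sketch of S1041-S1043: compare the per-line angle sets of the actual image
# (first data set) and the standard image (second data set). The linear
# penalty and the 0-100 range are assumptions for illustration.
def score_from_angle_sets(actual_angles, standard_angles, penalty_per_deg=1.0):
    """Both arguments map feature-line names to included angle values (degrees)."""
    common = set(actual_angles) & set(standard_angles)
    total_diff = sum(abs(actual_angles[n] - standard_angles[n]) for n in common)
    return max(0.0, 100.0 - penalty_per_deg * total_diff)
```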
According to the embodiment of the invention, multiple action categories are preset; each action category corresponds to a unique action feature scheme, and each action feature scheme corresponds to a unique image acquisition angle and position, set of key feature points, and set of feature connecting lines. The embodiment can therefore provide customized, efficient, and reliable guidance schemes for different physical activities, greatly widening the application range of the human action evaluation method. Meanwhile, the embodiment can generate action feature schemes for multiple viewing angles (horizontal x, horizontal y, and vertical z directions) for the same action category; each scheme is independent, but together they evaluate the standardization of the same sports action from a three-dimensional perspective, so that the user obtains normative guidance to the maximum extent. In addition, unlike the traditional judgment mode that relies on key feature points alone, this embodiment scores the matching degree of the corresponding feature connecting lines in the actual image and the standard image, which not only reduces the amount of data computation but also makes the score easier to visualize and makes it convenient to provide the user with a later action correction scheme, improving usability and user experience.
Second embodiment:
referring to fig. 2, an embodiment of the present invention discloses a human body motion scoring method based on machine vision, including S201-S215, wherein,
S201, generating an action feature scheme corresponding to the action category according to the action category selected by the user.
Similarly, the same action category in this embodiment may correspond to multiple action feature schemes.
And S202, acquiring the shooting specification and the action specification corresponding to the action characteristic scheme.
In the embodiment, the shooting specifications include camera angle, horizontal position, height and the like, and the action specifications are used for guiding standard initial actions of the user so as to improve the accuracy of scoring.
And S203, when the angle and the position of the camera meet the shooting specification, sending out first prompt information.
S204, when the initial action of the user accords with the action specification, sending out second prompt information; the first prompt message and the second prompt message comprise voice messages, image messages or light messages.
S205, acquiring an actual image of the user's action, and extracting the corresponding key feature points in the actual image according to the action feature scheme, wherein the key feature points comprise feature points of the head, neck, shoulder, chest, abdomen, elbow, wrist, hand, hip, knee, ankle, foot, or prop.
In this embodiment, the actual image is a continuous-time actual image. Other features are the same as those of the corresponding steps of the first embodiment, and are not described herein again.
And S206, connecting corresponding key feature points according to the motion feature scheme to generate feature connecting lines, wherein the feature connecting lines comprise feature connecting lines of the head and neck, the neck and shoulder, the chest and abdomen, the upper arm, the lower arm, the thigh, the calf, the feet or the hands and the props.
And S207, generating a first data set according to the included angle value of the corresponding characteristic connecting line in the actual image.
And S208, generating a second data set according to the included angle value of the corresponding characteristic connecting line in the standard image.
S206-S208 are the same as the corresponding steps of the first embodiment and are not described again here.
S209, the key frame and the transition frame in the actual image are determined.
In this embodiment, a key frame is generally an image corresponding to a pause, and a transition frame is an image corresponding to the transition between two adjacent pauses. Key frames are mainly used to evaluate how standard an action is, while transition frames are mainly used to evaluate the accuracy of the action's force exertion.
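One simple way to realize S209, sketched below, is to threshold inter-frame joint motion: frames where the pose barely moves are treated as key frames (pauses), and the rest as transition frames. The mean-displacement metric and threshold value are assumptions, not the invention's specified method:

```python
# Sketch of S209: label each frame of a continuous-time sequence as a key
# frame (pause) or a transition frame by mean inter-frame joint displacement.
# Assumes adjacent frames share at least one joint; threshold is illustrative.
def classify_frames(frames, pause_threshold=2.0):
    """frames: list of dicts {joint_name: (x, y)}. Returns 'key'/'transition' labels.

    Frame 0 has no predecessor, so it is labeled 'key' by convention."""
    labels = ["key"]
    for prev, cur in zip(frames, frames[1:]):
        common = set(prev) & set(cur)
        motion = sum(abs(cur[j][0] - prev[j][0]) + abs(cur[j][1] - prev[j][1])
                     for j in common) / len(common)
        labels.append("key" if motion < pause_threshold else "transition")
    return labels
```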
And S210, generating a first score S1 according to the included angle value difference of the corresponding key frames in the first data set and the second data set.
The first score S1 of this step is the score of all the user's pause actions.
And S211, generating a second score S2 according to the included angle value difference of the corresponding transition frames in the first data set and the second data set.
The second score S2 of this step is the score of all the user's transition actions.
S212, generating a third score S3 according to the number of corresponding key frames and transition frames in the actual image and the standard image.
The numbers of key frames and transition frames measure the durations of the pause actions and transition actions, so the third score S3 of this step scores the user's sense of rhythm during movement.
S213, determining the human body action score S according to the formula
S = D1 × S1 + D2 × S2 + D3 × S3
wherein D1 is the weight of the first score S1, D2 is the weight of the second score S2, and D3 is the weight of the third score S3.
In this step, weight values are added to S1, S2, and S3, respectively, to more accurately measure the standard degree of the corresponding action.
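The weighted combination of S213 then reduces to one line; the plain weighted-sum form and the default weight values below are assumptions consistent with the surrounding text, not values specified by the invention:

```python
# Sketch of S213: combine the key-frame score S1, transition-frame score S2,
# and rhythm score S3 with weights D1, D2, D3. The weighted-sum form and the
# default weights are assumptions for illustration.
def total_score(s1, s2, s3, d1=0.5, d2=0.3, d3=0.2):
    """Human motion score S = D1*S1 + D2*S2 + D3*S3."""
    return d1 * s1 + d2 * s2 + d3 * s3
```

In practice the weights would be tuned per action category, which is how the embodiment achieves customization for different categories.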
And S214, displaying the actual image with the lowest score and the standard image of the corresponding frame.
And S215, highlighting the feature connecting line with the maximum included angle difference in the actual image.
The scores in S214-S215 are the scores of the key frames or transition frames, and the specific generation manner may refer to S1041-S1043 in the first embodiment. By highlighting the corresponding feature connecting line in the lowest-scoring actual image and displaying its difference from the standard angle, the difference is presented to the user intuitively, which facilitates correction.
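S214-S215 can be sketched as a selection problem: find the lowest-scoring frame, then the feature connecting line with the largest angle difference within it. The input data shapes below are illustrative assumptions:

```python
# Sketch of S214-S215: pick the lowest-scoring frame and, within it, the
# feature line with the largest included-angle difference so the display
# layer can highlight it. Input shapes are assumptions for illustration.
def worst_frame_and_line(frame_scores, frame_angle_diffs):
    """frame_scores: per-frame scores; frame_angle_diffs: per-frame dicts
    mapping feature-line names to absolute angle differences (degrees)."""
    worst = min(range(len(frame_scores)), key=lambda i: frame_scores[i])
    diffs = frame_angle_diffs[worst]
    return worst, max(diffs, key=diffs.get)
```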
This embodiment is a scoring method for continuous actions. It uses the action specification corresponding to the action category to guide the user's standard initial action, improving scoring accuracy; by introducing features such as key frames, transition frames, and the numbers of key frames and transition frames, it provides a multi-dimensional evaluation standard for continuous actions and improves reliability, while the weight coefficients ensure customization and accuracy for different action categories. The user experience is thereby improved.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (7)

1. A human body action scoring method based on machine vision is characterized by comprising the following steps:
acquiring an action characteristic scheme according to the action category selected by the user, and initially correcting the image acquisition angle and position;
acquiring an actual image of the user's action, and extracting the corresponding key feature points in the actual image according to the action feature scheme, wherein the key feature points comprise feature points of the head, neck, shoulder, chest, abdomen, elbow, wrist, hand, hip, knee, ankle, foot, or prop;
connecting corresponding key feature points according to the action feature scheme to generate feature connecting lines, wherein the feature connecting lines comprise the feature connecting lines of the head and neck, the neck and shoulder, the chest and abdomen, the upper arm, the lower arm, the thigh, the calf, the foot, or the hand and prop;
and determining the human body action score according to the matching degree of the corresponding characteristic connecting lines in the actual image and the standard image.
2. The human motion scoring method according to claim 1, wherein the step of determining the human motion score according to the matching degree of the corresponding feature lines in the actual image and the standard image comprises:
generating a first data set according to the included angle value of the corresponding characteristic connecting line in the actual image;
generating a second data set according to the included angle value of the corresponding characteristic connecting line in the standard image;
and determining the human body action score according to the difference value of the corresponding included angle values in the first data set and the second data set.
3. The human motion scoring method according to claim 2, wherein the actual image is a continuous time actual image, and the step of determining the human motion score according to the difference between the corresponding included angle values in the first data set and the second data set comprises:
determining a key frame and a transition frame in an actual image;
generating a first score S1 according to the included angle value difference of the corresponding key frames in the first data set and the second data set;
generating a second score S2 according to the included angle value difference of the corresponding transition frames in the first data set and the second data set;
generating a third score S3 according to the number of corresponding key frames and transition frames in the actual image and the standard image;
according to the formula
S = D1 × S1 + D2 × S2 + D3 × S3
determining a human motion score S, wherein D1 is a weight of the first score S1, D2 is a weight of the second score S2, and D3 is a weight of the third score S3.
4. The human motion scoring method according to claim 3, wherein the step of obtaining a motion feature scheme according to the motion category selected by the user and initially correcting the image capturing angle and position comprises:
generating an action characteristic scheme corresponding to the action category according to the action category selected by the user;
acquiring a shooting specification and an action specification corresponding to the action characteristic scheme;
when the angle and the position of the camera meet the shooting specification, sending out first prompt information;
when the initial action of the user accords with the action specification, sending out second prompt information; the first prompt message and the second prompt message comprise voice messages, image messages or light messages.
5. The human motion scoring method according to claim 4, wherein after the step of determining the human motion score according to the matching degree of the corresponding feature lines in the actual image and the standard image, the method further comprises:
displaying the actual image with the lowest score and the standard image of the corresponding frame;
and highlighting the feature connecting line with the maximum included angle difference in the actual image.
6. The human motion scoring method according to any one of claims 1 to 5, wherein the motion categories include martial arts motions, dance motions, sporting event motions and fitness motions.
7. The human motion scoring method according to any one of claims 1 to 5, wherein the props comprise guns, swords, knives, sticks, fans, dumbbells and barbells, and the connecting lines of the features of the props are central axes of the props.
CN202010485658.1A 2020-06-01 2020-06-01 Human body action scoring method based on machine vision Active CN111639605B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010485658.1A CN111639605B (en) 2020-06-01 2020-06-01 Human body action scoring method based on machine vision


Publications (2)

Publication Number Publication Date
CN111639605A true CN111639605A (en) 2020-09-08
CN111639605B CN111639605B (en) 2024-04-26

Family

ID=72330621

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010485658.1A Active CN111639605B (en) 2020-06-01 2020-06-01 Human body action scoring method based on machine vision

Country Status (1)

Country Link
CN (1) CN111639605B (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107050774A * 2017-05-17 2017-08-18 上海电机学院 Fitness action error correction system and method based on action acquisition
US10096125B1 (en) * 2017-04-07 2018-10-09 Adobe Systems Incorporated Forecasting multiple poses based on a graphical image
CN110448870A * 2019-08-16 2019-11-15 深圳特蓝图科技有限公司 Human body posture training method
CN110866417A (en) * 2018-08-27 2020-03-06 阿里巴巴集团控股有限公司 Image processing method and device and electronic equipment

Also Published As

Publication number Publication date
CN111639605B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
US11745055B2 (en) Method and system for monitoring and feed-backing on execution of physical exercise routines
KR102700604B1 (en) Exercise program recommendation system according to physical ability
KR101959079B1 (en) Method for measuring and evaluating body performance of user
CN108597578A Human motion evaluation method based on two-dimensional skeleton sequences
CN110298218B (en) Interactive fitness device and interactive fitness system
KR102320960B1 (en) Personalized home training behavior guidance and correction system
US20160038088A1 (en) Systems and devices for measuring, capturing, and modifying partial and full body kinematics
JP7126812B2 (en) Detection device, detection system, image processing device, detection method, image processing program, image display method, and image display system
KR102238085B1 (en) Device and method for analyzing motion
CN110428486B (en) Virtual interaction fitness method, electronic equipment and storage medium
JP2013111449A (en) Video generating apparatus and method, and program
CN113856186A (en) Pull-up action judging and counting method, system and device
CN113262459B (en) Method, apparatus and medium for determining motion standard of sport body-building mirror
CN113191200A (en) Push-up test counting method, device, equipment and medium
CN112818800A (en) Physical exercise evaluation method and system based on human skeleton point depth image
WO2020259858A1 (en) Framework for recording and analysis of movement skills
CN113409651B (en) Live broadcast body building method, system, electronic equipment and storage medium
KR102013705B1 (en) Apparatus and method for recognizing user's posture in horse-riding simulator
Kishore et al. Smart yoga instructor for guiding and correcting yoga postures in real time
US20240042281A1 (en) User experience platform for connected fitness systems
CN111639605A (en) Human body action scoring method based on machine vision
CN111353345B (en) Method, apparatus, system, electronic device, and storage medium for providing training feedback
US20230145451A1 (en) Monitoring exercise activity in a gym environment
JP2018187284A (en) Exercise state diagnostic system and exercise state diagnostic program
US20160249834A1 (en) Range of motion capture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant