CN111652078A - Yoga action guidance system and method based on computer vision - Google Patents


Info

Publication number
CN111652078A
CN111652078A
Authority
CN
China
Prior art keywords
trainer
action
evaluation
posture
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010393060.XA
Other languages
Chinese (zh)
Inventor
刘勇
杨小倩
刘振杰
杨建党
崔瑜翔
邹昭源
刘颖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University ZJU filed Critical Zhejiang University ZJU
Priority to CN202010393060.XA
Publication of CN111652078A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0003 Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
    • A63B24/0006 Computerised comparison for qualitative assessment of motion sequences or the course of a movement
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0062 Monitoring athletic performances, e.g. for determining the work of a user on an exercise apparatus, the completed jogging or cycling distance
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622 Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N5/00 Computing arrangements using knowledge-based models
    • G06N5/04 Inference or reasoning models
    • G06N5/048 Fuzzy inferencing
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B24/00 Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
    • A63B24/0062 Monitoring athletic performances, e.g. for determining the work of a user on an exercise apparatus, the completed jogging or cycling distance
    • A63B2024/0065 Evaluating the fitness, e.g. fitness level or fitness index
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B71/0622 Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
    • A63B2071/0625 Emitting sound, noise or music
    • A63B2071/063 Spoken or verbal instructions
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B2230/00 Measuring physiological parameters of the user
    • A63B2230/62 Measuring physiological parameters of the user posture

Landscapes

  • Engineering & Computer Science (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Social Psychology (AREA)
  • Automation & Control Theory (AREA)
  • Fuzzy Systems (AREA)
  • Psychiatry (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a yoga action guidance system and method based on computer vision. The yoga action guidance system comprises: an image acquisition module for acquiring images of the current body postures of the trainer and the coach in real time; an image processing and evaluation module for extracting human skeleton information from the images and, based on it, evaluating the trainer's posture and inferring action-guidance sentences; a voice broadcast and evaluation display module for reminding the trainer whether the posture is standard, giving action guidance according to the action-guidance sentences, and displaying the trainer's motion evaluation data; and a cloud data management module that interacts with the image acquisition module, the image processing and evaluation module, and the voice broadcast and evaluation display module respectively. The invention provides the trainer with a convenient, interactive mode of action guidance, helps the trainer perform standard actions without a coach, improves training efficiency, and avoids bodily injury caused by wrong training postures.

Description

Yoga action guidance system and method based on computer vision
Technical Field
The invention belongs to the technical field of fitness management training, and particularly relates to a yoga action guidance system and method based on computer vision.
Background
With the continuous development of society, the pace of life keeps accelerating and people pay ever more attention to their health. More and more people take up exercise, countless fitness courses and apps are available online, and working out at home has become popular. Yoga is one such activity: it is easy to practice and requires no equipment, and is therefore favored by many. Yoga not only relieves the fatigue of office workers who sit at a desk for long hours; practiced persistently, it also boosts the metabolism, improves the physique, cultivates temperament, and enhances personal charm.
Surveys show that most people learn by finding videos on their own. Although this meets their needs to some extent, the experience is monotonous and hard to stick with. Meanwhile, without the professional guidance of a yoga coach, the general public can hardly guarantee standard movements, risks injury, and finds it difficult to make a targeted plan based on their own training condition and goals. Moreover, at a yoga studio the practitioner must pay a fee for on-site teaching and guidance by a yoga coach; when there are many practitioners, the coach cannot attend to each one, while one-on-one teaching raises the cost further.
Some existing schemes, for example patent document CN108853946A, disclose a Kinect-based fitness guidance training system and method, which requires a specific acquisition device, namely a Kinect sensor, to extract the positions of human key points. As another example, patent document CN106422274A discloses a multi-sensor yoga evaluation system in which wearable devices compute the required information; all calculation is built on hardware sensors and the operating procedure is cumbersome, which hinders the trainer's exercise. In addition, fitness movement detection and analysis systems based on person image recognition compare stored parameter features of standard fitness movements with collected data features; they require a standard movement template and have only limited applicability.
Disclosure of Invention
Based on the above shortcomings of the prior art, the present invention aims to provide a yoga action guidance system and method based on computer vision that establish a flexible and efficient yoga posture assessment system, help a yoga trainer adjust action details in time, improve training efficiency, and avoid bodily injury caused by wrong training postures.
The purpose of the invention can be realized by the following technical scheme:
a yoga action guidance system based on computer vision, comprising:
the image acquisition module is used for acquiring images of the current body postures of the trainer and the coach in real time;
the image processing and evaluation module is used for extracting human skeleton information from the images and, based on it, evaluating the trainer's posture and inferring action-guidance sentences;
the voice broadcast and evaluation display module is used for reminding the trainer whether the posture is standard, giving action guidance according to the action-guidance sentences, and displaying the trainer's motion evaluation data;
the cloud data management module is in communication connection with the image acquisition module, the image processing and evaluation module, and the voice broadcast and evaluation display module respectively; it receives and stores the image data uploaded by the image acquisition module and forwards them to the image processing and evaluation module, and receives the motion evaluation data and action-guidance sentences uploaded by the image processing and evaluation module and forwards them to the voice broadcast and evaluation display module.
Preferably, the image acquisition module is the source of the trainer's motion-state information. The module captures and records the current body states of the trainer, and of the coach in the training video, as images; arranging multiple images in time order yields a recorded video of the trainer's training state over a period of time, which serves as the basic input for the evaluation algorithm.
Preferably, the image processing and evaluating module includes:
the image submodule is used for extracting the positions of human key points from the image data and constructing a human skeleton by connecting the key points in order, so as to obtain the motion posture information of the trainer and of the coach;
the evaluation submodule is used for comparing the trainer's motion posture information with the coach's to obtain the trainer's motion evaluation data; it also judges, from the motion evaluation data, whether action guidance is needed and, when it is, infers action-guidance sentences.
Preferably, the evaluation indices for posture comparison include:
a distance index: the degree of deviation between the distances of the same two key points in the trainer's and the coach's motion postures;
an angle index: the degree of deviation between the included angles formed by the bones in the trainer's motion posture and the corresponding angles in the coach's motion posture;
a frequency index: the ratio of the number of periodic actions completed per unit time in the trainer's motion posture to that in the coach's motion posture;
and a scale factor: the proportionality coefficient between the size of the trainer's skeleton and the size of the coach's skeleton.
As a preferred scheme, the posture comparison algorithm integrates the standardness of a single action and the completion of the full movement process: it selects the angle index and the frequency index to form a joint evaluation index, and automatically weights the key points of each index by an entropy-weight method based on the time series, so as to obtain a comprehensive score for the trainer's action.
The comprehensive scores are divided into four grades: failing, qualified, good and excellent. If the comprehensive score is good or qualified, an instruction to infer action-guidance sentences is output, and the sentences are inferred from the matching of the key-point positions; if the comprehensive score is failing, a command is output telling the trainer to self-adjust against the coach's posture.
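A minimal sketch of this grading-and-dispatch logic; the numeric grade boundaries (60/75/90) are illustrative assumptions, since the patent only names the four grades:

```python
def dispatch(score: float) -> str:
    """Map a composite score to a grade and pick the follow-up action.

    The failing/qualified/good/excellent boundaries are assumptions for
    illustration; the patent does not specify them.
    """
    if score < 60:
        grade = "failing"
    elif score < 75:
        grade = "qualified"
    elif score < 90:
        grade = "good"
    else:
        grade = "excellent"

    if grade in ("qualified", "good"):
        return f"{grade}: infer action-guidance sentences from key-point matching"
    if grade == "failing":
        return "failing: prompt trainer to self-adjust against the coach posture"
    # The patent gives no action for "excellent"; assume no guidance is needed.
    return "excellent: no guidance needed"
```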
Preferably, the action guidance statement includes:
instruction type 1: bending problems;
instruction type 2: limb rotation problems;
instruction type 3: limb distance adjustment problems;
instruction type 4: waist-twisting action problems;
instruction type 5: action problems requiring side bending;
instruction type 6: action problems requiring the hip to be pushed forward or contracted;
instruction type 7: action problems requiring the chest to be lifted or tucked;
further, the derivation method of the instruction type 1 is as follows: the bending problem is based on a judgment function based on a key point selected from the group consisting of neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, and left ankle
Figure BDA0002486339700000031
Wherein,
Figure BDA0002486339700000032
is the bending angle of key point B in the coach video, A and C are adjacent key points of key point B, VABCThe bend angle of key point B in the trainer video, and A and C are the neighboring key points of key point B. When p is1If it is a negative number, the direction is determined to be straight, and when p is1When the direction is positive, the direction is judged to be bending. Degree directly using | p1The degree of bending is defined.
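A minimal sketch of the type-1 check under the reconstruction above (the discriminant form $p_1 = V_{ABC} - V^*_{ABC}$ is inferred from the sign semantics given in the text), assuming the joint angles have already been computed:

```python
def bend_instruction(v_trainer: float, v_coach: float) -> tuple[str, float]:
    """Type-1 discriminant p1 = V_ABC - V*_ABC on precomputed joint angles
    (degrees). Negative p1 -> straighten, positive p1 -> bend; |p1| is the
    degree of the correction."""
    p1 = v_trainer - v_coach
    return ("bend" if p1 > 0 else "straighten"), abs(p1)
```

For example, `bend_instruction(150.0, 170.0)` returns `('straighten', 20.0)`: the trainer's joint is 20 degrees more bent than the coach's.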
Further, the derivation method of instruction type 2 is as follows: the limb rotation problem uses key-point pairs selected from the left hand tip and left wrist, the right hand tip and right wrist, the left foot tip and left ankle, and the right foot tip and right ankle, with the direction discrimination function

$$p_2 = \frac{L_{AB}}{k\,L^*_{AB}}$$

where k is the scale factor, $L^*_{AB}$ is the length of the line connecting key points A and B in the coach video, and $L_{AB}$ is the corresponding length in the trainer video; key point A is any one of the left hand tip, right hand tip, left foot tip and right foot tip, and key point B is the corresponding point among the left wrist, right wrist, left ankle and right ankle. When $p_2 > 1$, the direction is judged as rotating away from the plane parallel to the camera; when $p_2 < 1$, the direction is judged as rotating toward the plane parallel to the camera.
Further, the derivation method of instruction type 3 is as follows: the limb distance adjustment problem uses key-point pairs selected from the left and right wrists, the left and right elbows, the left and right knees, and the left and right ankles, with the discrimination function

$$p_3 = \frac{L_{AB}}{k\,L^*_{AB}}$$

where k is the scale factor defined above, $L_{AB}$ is the distance between the selected key-point pair in the trainer video, and $L^*_{AB}$ is the corresponding distance in the coach video. When $p_3 > 1$, the direction is judged as decreasing the distance; when $p_3 < 1$, the direction is judged as increasing it.
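Instruction types 2 and 3 share the same scale-normalized length ratio; a sketch under the reconstruction above, with keypoints as (x, y) tuples and `k` the scale factor defined later in the description:

```python
import math

def dist(a: tuple, b: tuple) -> float:
    return math.hypot(a[0] - b[0], a[1] - b[1])

def length_ratio(trainer_a, trainer_b, coach_a, coach_b, k: float) -> float:
    """p = L_AB / (k * L*_AB): the trainer's segment length against the
    coach's, rescaled to the trainer's body size by the factor k."""
    return dist(trainer_a, trainer_b) / (k * dist(coach_a, coach_b))

def limb_distance_instruction(p3: float) -> str:
    """Type-3 reading of the ratio: p3 > 1 -> reduce the distance between
    the limbs, p3 < 1 -> increase it."""
    return "reduce" if p3 > 1 else "increase"
```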
Further, the derivation method of instruction type 4 is as follows: the waist-twisting action problem uses the distance between the left and right shoulders and the distance between the left and right hips, and the directions include twisting the waist and twisting back. The direction discrimination function is

$$p_4 = \frac{L_{AB}/L_{CD}}{L^*_{AB}/L^*_{CD}}$$

where $L_{AB}$ is the distance between the trainer's left and right shoulders, $L_{CD}$ is the distance between the trainer's left and right hips, $L^*_{AB}$ is the distance between the coach's left and right shoulders, and $L^*_{CD}$ is the distance between the coach's left and right hips. When $p_4 > 1$, the direction is judged as twisting back; when $p_4 < 1$, the direction is judged as twisting the waist.
Further, the derivation method of instruction type 5 is as follows: for action problems requiring side bending, the angles between the horizontal and the lines connecting the left and right shoulders and the left and right hips are used as comparison factors, with the angle solved as

$$V = \arctan\frac{Y_A - Y_B}{X_A - X_B}$$

where $Y_A$ is the Y-axis coordinate of the left shoulder (or hip), $Y_B$ the Y-axis coordinate of the right shoulder (or hip), $X_A$ the X-axis coordinate of the left shoulder (or hip), and $X_B$ the X-axis coordinate of the right shoulder (or hip). The directions include side bending and bending back. The direction discrimination function and degree are deduced from the two factors by fuzzy inference, with the composition using the max-min method and the fuzzy implication using the intersection (min) method:

$$Z_5 = \left[A' \circ (A_i \to Z_{5i})\right] \wedge \left[B' \circ (B_i \to Z_{5i})\right]$$

where $A'$ is the bending degree of the angle between the shoulder line and the horizontal, $A_i \to Z_{5i}$ is the contribution of the i-th result in the discrimination function for that angle, $B'$ is the bending degree of the angle between the hip line and the horizontal, and $B_i \to Z_{5i}$ is the contribution of the i-th result for that angle. The direction discrimination function is

$$p_5 = Z_5 - Z^*_5$$

where $Z_5$ is the trainer's direction discrimination and degree value and $Z^*_5$ is the coach's. When $p_5$ is positive, the direction is judged as bending back; when $p_5$ is negative, the direction is judged as side bending.
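A toy version of the max-min fuzzy inference named above, with min used for the implication, over discretized membership vectors; the rule matrices and any defuzzification step are illustrative assumptions, since the patent gives no concrete membership functions:

```python
def max_min_infer(a_prime, b_prime, rules_a, rules_b):
    """Max-min composition with min (intersection) implication.

    a_prime, b_prime: membership vectors of the two observed factors
    (e.g. shoulder-line and hip-line bending degrees).
    rules_a, rules_b: matrices R[i][z] holding, for input level i, the
    membership of output level z (the A_i -> Z_5i contributions).
    Returns the combined output membership vector for Z_5.
    """
    def compose(x, rule):
        n_out = len(rule[0])
        # (x o R)(z) = max_i min(x[i], R[i][z])
        return [max(min(x[i], rule[i][z]) for i in range(len(x)))
                for z in range(n_out)]

    z_a = compose(a_prime, rules_a)
    z_b = compose(b_prime, rules_b)
    # intersection of the two partial conclusions
    return [min(u, v) for u, v in zip(z_a, z_b)]
```

Defuzzifying this vector (for instance by centroid) gives the trainer's $Z_5$; repeating the procedure on the coach's factors gives $Z^*_5$, and the sign of $p_5 = Z_5 - Z^*_5$ selects between bending back and side bending.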
Further, the derivation method of instruction type 6 is as follows: for action problems requiring the hip to be pushed forward or contracted, the comparison factors are the deviations of the average bending degree of the knees and of the hips from the coach's,

$$A' = V_A - V^*_A, \qquad B' = V_B - V^*_B$$

where $V_A$ is the average bending angle of the two knees in the trainer video, $V^*_A$ is the corresponding angle in the coach video, $V_B$ is the average bending angle of the hips in the trainer video, and $V^*_B$ is the corresponding angle in the coach video; the difference between the average knee bending degree and the average hip bending degree is thus obtained. The directions include pushing the hip forward and contracting the hip, and the direction discrimination function and degree $p_6$ are deduced from the two factors by the same fuzzy inference: the composition uses the max-min method and the implication uses the intersection method. When $p_6$ is positive, the direction is judged as contracting the hip; when $p_6$ is negative, the direction is judged as pushing the hip forward.
Further, the derivation method of instruction type 7 is as follows: for action problems requiring the chest to be lifted or tucked, the average bending angles of the neck and the hips and the length of the shoulder line are used as comparison factors. A preliminary judgment result is deduced from the average bending degree of the hips and the bending degree of the neck, using the inference model of the hip pushing/contracting action problem; here $A'$ is the bending degree of the neck, $A_i \to Z_{7i}$ is the contribution of the i-th result in the discrimination function for the neck bending degree, $B'$ is the bending degree of the hips, and $B_i \to Z_{7i}$ is the contribution of the i-th result for the hip bending degree. After the preliminary judgment result $Z_7$ is obtained, the degree is fine-tuned by the shoulder-line length, giving the direction discrimination function

$$p_7 = Z_7 \cdot \frac{L_{AB}}{k\,L^*_{AB}}$$

where $L_{AB}$ is the shoulder-line length in the trainer video, $L^*_{AB}$ is the shoulder-line length in the coach video, and k is the scale factor defined above. When $p_7$ is positive, the direction is judged as lifting the chest; when $p_7$ is negative, the direction is judged as tucking the chest.
All instruction types are assigned priorities; if priorities are equal, instructions are executed in order of their type numbers.
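A sketch of the dispatch order implied here; the concrete priority numbers follow the example given in the detailed description below (types 1-4 get priorities 1, 2, 3, 3 and types 5-7 all get 4):

```python
# Lower number = higher priority; ties are broken by instruction-type number.
PRIORITIES = {1: 1, 2: 2, 3: 3, 4: 3, 5: 4, 6: 4, 7: 4}

def order_instructions(triggered: list[int]) -> list[int]:
    """Sort the instruction types whose direction and degree were inferred
    into the order in which their guidance sentences should be issued."""
    return sorted(triggered, key=lambda t: (PRIORITIES[t], t))
```

For example, `order_instructions([6, 3, 4, 1])` yields `[1, 3, 4, 6]`.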
The data obtained by the image acquisition module are transmitted to the cloud in real time and stored by the cloud data management module, which forwards them to the image processing and evaluation module. That module compares and evaluates the trainer's actions against the coach's and infers action-guidance sentences. The evaluative and instructive data it produces are then sent by the cloud data management module to the voice broadcast and evaluation display module, which outputs the evaluation results and guidance sentences as images, text and speech to score and guide the trainer's actions.
The image data in the image acquisition module are stored as RGB images of arbitrary size, captured by an ordinary camera such as a mobile phone camera or an external camera of a personal computer. The acquired image data are sent to the cloud data management module.
The cloud data management module is the data storage and transfer center. It consists of a server, is responsible for managing the data flow of the whole system, and acts as the system's intermediate management layer.
The cloud data management module receives and stores the video obtained by the image acquisition module; after confirming that the data link to the image processing and evaluation module is intact and that a data request from that module is present, it encapsulates the newly received data and sends it to the image processing and evaluation module.
The image processing and evaluation module is the center for analyzing and evaluating the trainer's posture. The module extracts human posture information from the acquired images: it first extracts the positions of the human key points in the image and draws the human skeleton to describe the current trainer's posture. The obtained trainer posture information is then compared with the coach posture information to obtain a comprehensive evaluation score and to infer action-guidance sentences.
After the image processing and evaluation module finishes inferring the guidance sentences, it encapsulates the obtained evaluation results and instruction sentences and sends a data transmission request to the cloud data management module. Once the cloud module receives the request and confirms the channel is clear, it returns a feedback signal; the image processing and evaluation module then packages and sends the evaluation result data to the cloud data management module, which stores the received data and packages and forwards them to the voice broadcast and evaluation display module.
The voice broadcast and evaluation display module is the module that guides the trainer directly. It runs on a terminal, which may be a PC or a mobile phone. The module contains a voice broadcast submodule that outputs the data transmitted by the cloud data management module as speech, and a display submodule, deployable on the PC or the mobile phone, that displays the image information transmitted by the cloud data management module.
As a preferred scheme, the displayed content in the display submodule includes the standard yoga video, the comprehensive evaluation index, and a skeleton model of the actor in the standard yoga video, with the skeleton drawn over the human body in the standard video; the display also includes a skeleton model of the trainer drawn over the trainer's body. The display further shows all the evaluation information obtained for the corresponding 22 key points, the evaluation indices comprising the distance index, angle index, frequency index, comprehensive evaluation index, and the parts that need adjustment.
As a preferred scheme, the image acquisition module uses a computer camera, another ordinary camera, or a mobile phone camera. Computer and mobile phone cameras feed the acquired images directly to the PC or phone, while ordinary cameras feed the PC via a data cable or data address. The acquired image must contain the user's complete body posture and the training scene, which is achieved by adjusting the working position of the image acquisition module, for example the height of the phone stand or the distance of the field of view.
As a preferred scheme, image transmission in the image acquisition module supports both online and offline uploading, i.e., the user either records and uploads the training video in real time or uploads a previously recorded one. In online mode, image information is acquired and sent to the cloud data management module for real-time processing; in offline mode, a video of the specified length is uploaded at the user's request.
The invention also provides a yoga action guiding method based on computer vision, which comprises the following steps:
s1, acquiring real-time image data of a trainer and a coach;
S2, extracting the positions of the human key points in the image with the OpenPose algorithm and constructing the human skeleton bottom-up, obtaining the motion posture information of the trainer and of the coach respectively;
s3, comparing the exercise posture information of the trainer with the exercise posture information of the coach to obtain exercise evaluation data of the trainer;
S4, judging whether to give action guidance according to the motion evaluation data; if yes, inferring action-guidance sentences;
and S5, giving yoga action guidance according to the action-guidance sentences.
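A minimal end-to-end sketch of steps S1-S5; `pose_estimator` stands in for an OpenPose-style bottom-up keypoint extractor, and the `evaluate`, `infer_guidance` and `announce` hooks are hypothetical placeholders for the comparison, inference and broadcast stages described below:

```python
from typing import Callable, Iterable

def guide_session(trainer_frames: Iterable, coach_frames: Iterable,
                  pose_estimator: Callable,
                  evaluate: Callable, infer_guidance: Callable,
                  announce: Callable) -> None:
    """Run the S1-S5 loop over paired trainer/coach frames."""
    for trainer_img, coach_img in zip(trainer_frames, coach_frames):  # S1
        trainer_pose = pose_estimator(trainer_img)                    # S2
        coach_pose = pose_estimator(coach_img)
        score, needs_guidance = evaluate(trainer_pose, coach_pose)    # S3
        if needs_guidance:                                            # S4
            announce(infer_guidance(trainer_pose, coach_pose))        # S5
```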
Preferably, in step S3, the evaluation indices for posture comparison include:
a distance index: the degree of deviation between the distances of the same two key points in the trainer's and the coach's motion postures;
an angle index: the degree of deviation between the included angles formed by the bones in the trainer's motion posture and the corresponding angles in the coach's motion posture;
a frequency index: the ratio of the number of periodic actions completed per unit time in the trainer's motion posture to that in the coach's motion posture;
and a scale factor: the proportionality coefficient between the size of the trainer's skeleton and the size of the coach's skeleton.
As a preferred scheme, in step S3 the posture comparison algorithm integrates the standardness of a single action and the completion of the full movement process: it selects the angle index and the frequency index to form a joint evaluation index, and automatically weights the key points of each index by an entropy-weight method based on the time series, so as to obtain a comprehensive score for the trainer's action;
step S4 includes: dividing the comprehensive scores into four grades of failing, qualified, good and excellent; if the comprehensive score is good or qualified, outputting an instruction to infer action-guidance sentences and inferring them from the matching of the key-point positions; if the comprehensive score is failing, outputting a command telling the trainer to self-adjust against the coach's posture.
Preferably, the action guidance statement includes:
instruction type 1: bending problems;
instruction type 2: limb rotation problems;
instruction type 3: limb distance adjustment problems;
instruction type 4: waist-twisting action problems;
instruction type 5: action problems requiring side bending;
instruction type 6: action problems requiring the hip to be pushed forward or contracted;
instruction type 7: action problems requiring the chest to be lifted or tucked.
Further, the derivation method of instruction type 1 is as follows: the bending problem uses key points selected from the neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee and left ankle, with the judgment function

$$p_1 = V_{ABC} - V^*_{ABC}$$

where $V^*_{ABC}$ is the bending angle at key point B in the coach video and $V_{ABC}$ is the bending angle at key point B in the trainer video, A and C being the key points adjacent to B in each case. When $p_1$ is negative, the direction is judged as straightening; when $p_1$ is positive, the direction is judged as bending. The degree of bending is defined directly by $|p_1|$.
Further, the derivation method of instruction type 2 is as follows: the limb rotation problem uses key-point pairs selected from the left hand tip and left wrist, the right hand tip and right wrist, the left foot tip and left ankle, and the right foot tip and right ankle, with the direction discrimination function

$$p_2 = \frac{L_{AB}}{k\,L^*_{AB}}$$

where k is the scale factor, $L^*_{AB}$ is the length of the line connecting key points A and B in the coach video, and $L_{AB}$ is the corresponding length in the trainer video; key point A is any one of the left hand tip, right hand tip, left foot tip and right foot tip, and key point B is the corresponding point among the left wrist, right wrist, left ankle and right ankle. When $p_2 > 1$, the direction is judged as rotating away from the plane parallel to the camera; when $p_2 < 1$, the direction is judged as rotating toward the plane parallel to the camera.
Further, the derivation method of instruction type 3 is as follows: the limb distance adjustment problem uses key-point pairs selected from the left and right wrists, the left and right elbows, the left and right knees, and the left and right ankles, with the discrimination function

$$p_3 = \frac{L_{AB}}{k\,L^*_{AB}}$$

where k is the scale factor defined above, $L_{AB}$ is the distance between the selected key-point pair in the trainer video, and $L^*_{AB}$ is the corresponding distance in the coach video. When $p_3 > 1$, the direction is judged as decreasing the distance; when $p_3 < 1$, the direction is judged as increasing it.
Further, the derivation method of instruction type 4 is as follows: the waist-twisting action problem uses the distance between the left and right shoulders and the distance between the left and right hips, and the directions include twisting the waist and twisting back. The direction discrimination function is

$$p_4 = \frac{L_{AB}/L_{CD}}{L^*_{AB}/L^*_{CD}}$$

where $L_{AB}$ is the distance between the trainer's left and right shoulders, $L_{CD}$ is the distance between the trainer's left and right hips, $L^*_{AB}$ is the distance between the coach's left and right shoulders, and $L^*_{CD}$ is the distance between the coach's left and right hips. When $p_4 > 1$, the direction is judged as twisting back; when $p_4 < 1$, the direction is judged as twisting the waist.
Further, the derivation method of instruction type 5 is as follows: for action problems requiring side bending, the angles between the horizontal and the lines connecting the left and right shoulders and the left and right hips are used as comparison factors, with the angle solved as

$$V = \arctan\frac{Y_A - Y_B}{X_A - X_B}$$

where $Y_A$ is the Y-axis coordinate of the left shoulder (or hip), $Y_B$ the Y-axis coordinate of the right shoulder (or hip), $X_A$ the X-axis coordinate of the left shoulder (or hip), and $X_B$ the X-axis coordinate of the right shoulder (or hip). The directions include side bending and bending back. The direction discrimination function and degree are deduced from the two factors by fuzzy inference, with the composition using the max-min method and the fuzzy implication using the intersection (min) method:

$$Z_5 = \left[A' \circ (A_i \to Z_{5i})\right] \wedge \left[B' \circ (B_i \to Z_{5i})\right]$$

where $A'$ is the bending degree of the angle between the shoulder line and the horizontal, $A_i \to Z_{5i}$ is the contribution of the i-th result in the discrimination function for that angle, $B'$ is the bending degree of the angle between the hip line and the horizontal, and $B_i \to Z_{5i}$ is the contribution of the i-th result for that angle. The direction discrimination function is

$$p_5 = Z_5 - Z^*_5$$

where $Z_5$ is the trainer's direction discrimination and degree value and $Z^*_5$ is the coach's. When $p_5$ is positive, the direction is judged as bending back; when $p_5$ is negative, the direction is judged as side bending.
Further, the derivation method of instruction type 6 is as follows: for action problems requiring the hip to be pushed forward or contracted, the comparison factors are the deviations of the average bending degree of the knees and of the hips from the coach's,

$$A' = V_A - V^*_A, \qquad B' = V_B - V^*_B$$

where $V_A$ is the average bending angle of the two knees in the trainer video, $V^*_A$ is the corresponding angle in the coach video, $V_B$ is the average bending angle of the hips in the trainer video, and $V^*_B$ is the corresponding angle in the coach video; the difference between the average knee bending degree and the average hip bending degree is thus obtained. The directions include pushing the hip forward and contracting the hip, and the direction discrimination function and degree $p_6$ are deduced from the two factors by the same fuzzy inference: the composition uses the max-min method and the implication uses the intersection method. When $p_6$ is positive, the direction is judged as contracting the hip; when $p_6$ is negative, the direction is judged as pushing the hip forward.
Further, the derivation method of instruction type 7 is as follows: for action problems requiring the chest to be lifted or tucked, the average bending angles of the neck and the hips and the length of the shoulder line are used as comparison factors. A preliminary judgment result is deduced from the average bending degree of the hips and the bending degree of the neck, using the inference model of the hip pushing/contracting action problem; here $A'$ is the bending degree of the neck, $A_i \to Z_{7i}$ is the contribution of the i-th result in the discrimination function for the neck bending degree, $B'$ is the bending degree of the hips, and $B_i \to Z_{7i}$ is the contribution of the i-th result for the hip bending degree. After the preliminary judgment result $Z_7$ is obtained, the degree is fine-tuned by the shoulder-line length, giving the direction discrimination function

$$p_7 = Z_7 \cdot \frac{L_{AB}}{k\,L^*_{AB}}$$

where $L_{AB}$ is the shoulder-line length in the trainer video, $L^*_{AB}$ is the shoulder-line length in the coach video, and k is the scale factor defined above. When $p_7$ is positive, the direction is judged as lifting the chest; when $p_7$ is negative, the direction is judged as tucking the chest.
As a preferred scheme, the yoga action guidance method based on computer vision further comprises:
setting priorities for all instruction types, and executing instructions of equal priority in order of their type numbers;
when the direction and degree of any instruction type have been inferred, combining the compared key points, the direction and the degree into an action-guidance sentence.
To keep the guidance sentences from conflicting with one another and to maximize the effect of the adjustment, priorities are set for all instructions, for example: instruction types 1, 2, 3 and 4 have priorities 1, 2, 3 and 3 respectively, and instruction types 5, 6 and 7 all have priority 4; when priorities are equal, execution follows the order of the instruction numbers.
When the image processing and evaluation module has inferred the direction and degree of a given instruction type, the compared key points, the direction and the degree together form the instruction sentence.
Compared with the prior art, the invention has the following beneficial effects:
the invention only needs vision to judge the action, simplifies the operation process of the trainer and does not hinder the trainer from exercising. Moreover, the invention provides an inference method for action instruction guidance, which deduces action modification instruction sentences according to the difference between the action of a trainer and the action of a coach so as to carry out voice guidance interaction and simulate the real guidance of the coach.
According to the invention, through action evaluation based on key points of a human body, a trainer can select any training video to compare postures, so that a better body building effect is obtained. When pictures or video streams are input, a skeleton of a human body is obtained by using a model, the motion is analyzed by calculating the angle between joints of the human body, the posture characteristics of a yoga trainer are compared with a yoga video to be simulated, whether the yoga motion is standard or not is determined, a suggestion is provided for the trainer or a warning is given when the posture deviation exceeds a certain limit, the trainer is guided to correct the posture of the trainer in real time, and the trainer can check the result of the evaluation of the training after the training is finished.
In the yoga action guidance system based on computer vision provided by the invention, the image acquisition module and the voice broadcast and evaluation display module reside on the mobile phone or PC, so no special acquisition, display or broadcast equipment is needed and the trainer can easily capture images and receive evaluation and guidance information. The cloud data management module coordinates data transmission and storage, expands the processing capacity for image processing and evaluation, and lets the trainer review historical data. It can receive data from several image acquisition modules simultaneously, and the image processing and evaluation module can evaluate and derive action guidance for multiple trainers in real time, so the system can serve several trainers at once. Through the voice broadcast and evaluation display module, the training comparison images, evaluation results and guidance commands are output with speech, giving the trainer a convenient, interactive mode of action guidance. The system helps people perform standard actions without a coach and achieves a gym-like effect at home. For beginners in particular, it improves training efficiency by prompting the yoga trainer to adjust action details in time, while avoiding bodily injury caused by wrong training postures. Built as a complete system combining a camera device and a personal computer, it facilitates individual yoga learning and training and can be applied to teaching and monitoring scenarios or automatic assessment of yoga learning.
The yoga action guidance method based on computer vision provided by the invention uses a concise and efficient algorithm, has a wide range of application, and places relatively low demands on the computing power of hardware such as sensors and computers, so it can be deployed on multiple platforms. The comprehensive evaluation algorithm yields complete and accurate evaluation results, and the action-guidance derivation algorithm turns the image comparison results into commands suited to adjusting the trainer's posture in a user-friendly way.
Drawings
Fig. 1 is a frame diagram of a yoga action guidance system based on computer vision according to an embodiment of the present invention;
Fig. 2 is a flowchart of a yoga action guidance method based on computer vision according to an embodiment of the present invention;
FIG. 3 is a human key point definition and skeleton diagram according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and the described embodiments are only some embodiments, but not all embodiments, of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the present invention without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1, the yoga guidance system based on image recognition in the embodiment of the present invention includes an image acquisition module, a cloud data management module, an image processing and evaluation module, and a voice broadcast and evaluation display module. The cloud data management module is respectively connected with the image acquisition module, the image processing and evaluation module and the voice broadcasting and evaluation display module.
Image data obtained by the image acquisition module are transmitted to the cloud in real time and stored by the cloud data management module, which forwards them to the image processing and evaluation module. That module compares and evaluates the trainer's actions and infers action-guidance sentences; the evaluative and instructive data it produces are transmitted by the cloud data management module to the voice broadcast and evaluation display module, and the evaluation results and guidance sentences are output as images, text and speech to score and guide the trainer's actions.
The image acquisition module is the source of the trainer's motion-state information. The module captures and records the current body states of the trainer, and of the coach in the training video, as images; arranging multiple images in time order yields a recorded video of the trainer's training state over a period of time, which serves as the basic input for the evaluation algorithm.
The image data are stored as RGB images of arbitrary size, captured by an ordinary camera such as a mobile phone camera or an external camera of a personal computer, and the acquired image data are sent to the cloud data management module.
The image acquisition module uses a computer camera, another ordinary camera, or a mobile phone camera. Computer and phone cameras feed the acquired images directly to the PC or phone, while ordinary cameras feed the PC via a data cable or data address. The acquired image contains the trainer's complete body posture and the training scene, which is ensured by adjusting the working position of the image acquisition module, for example the height of the phone stand or the distance of the field of view.
Image transmission in the image acquisition module supports both online and offline uploading, i.e., the trainer either records and uploads the training video in real time or uploads a previously recorded one. In online mode, image information is acquired at a set frequency and sent to the cloud data management module for real-time processing; in offline mode, a video of the specified length is uploaded at the trainer's request.
In practical application, the image acquisition module is placed at a fixed position where it can capture the complete body; the trainer moves within the camera's sensing range, and if part of the body leaves the frame, the posture estimation becomes incomplete or suffers large errors.
The cloud data management module is the data storage and transfer center. In the image information stored in the cloud, the human body regions are pixel blocks segmented according to visual information, and besides posture the stored information also contains redundancies such as the person's clothing and facial-expression information.
The cloud data management module is composed of a server, is responsible for managing data flow of the whole system and is a middle management system of the whole system.
The cloud data management module receives and stores the video data, at a given frame rate, obtained by the image acquisition module; after confirming that the data link to the image processing and evaluation module is intact and that a data request from that module is present, it packages the newly received data and sends them to the image processing and evaluation module.
The image processing and evaluation module is the center for analyzing and evaluating the trainer's posture. It extracts human posture information from the acquired image information. First, the OpenPose algorithm is chosen to extract the positions of the human key points in the image; the key points are key joints, i.e., joints with a certain degree of freedom and body parts of landmark character. The method extracts the human skeleton bottom-up; the skeleton model diagram is shown in fig. 3, and according to its features the human joints and the connections between bones are reconstructed as points and lines, rebuilding the human skeleton information to describe the current trainer's posture. The current trainer state is then scored against the posture evaluation standard of the coach video uploaded by the trainer, serving as the reference basis for the final voice broadcast and evaluation display module.
In actually captured video, the human actions and postures in consecutive frames are very similar; to improve efficiency, a frame-skipping estimation method is therefore adopted.
After obtaining the trainer's skeleton information, the image processing and evaluation module compares the trainer's motion posture information with the coach's according to the evaluation algorithm to obtain a comprehensive evaluation score. On receiving the data sent by the cloud data management module, it compares the trainer image with the coach image to form the basic comparison indices: the distance index, angle index, frequency index and scale factor. Comprehensive posture evaluation is then carried out: the angle and frequency information are selected to form a joint evaluation index, and the key points of each basic index are automatically weighted by an entropy-weight method based on the time series to give a comprehensive score for the trainer's action. Whether action guidance is needed is judged from the comprehensive evaluation index. The action guidance commands fall into seven types; the action guidance elements are obtained by inference from the basic comparison indices, and finally the elements are spliced together and sent to the cloud data management module.
The skeleton information is used as follows: the image processing and evaluation module evaluates yoga mainly by observing the stretching of the muscles and ligaments of each part of the body, and the degree of stretch is analyzed through the relations between key points. The definition of the human key points and the skeleton diagram used in the embodiment of the invention are shown in fig. 3. There are 22 key points, numbered as follows: 0-nose, 1-neck, 2-right shoulder, 3-right elbow, 4-right wrist, 5-left shoulder, 6-left elbow, 7-left wrist, 8-right hip, 9-right knee, 10-right ankle, 11-left hip, 12-left knee, 13-left ankle, 14-right eye, 15-left eye, 16-right ear, 17-left ear, 18-right hand tip, 19-left hand tip, 20-right foot tip, 21-left foot tip. Each point is represented by a coordinate (x, y) in the image coordinate system.
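The 22-point numbering above, written out as a lookup table for reference (names only; each detected point carries an (x, y) coordinate in the image frame):

```python
KEYPOINT_NAMES = {
    0: "nose", 1: "neck", 2: "right shoulder", 3: "right elbow",
    4: "right wrist", 5: "left shoulder", 6: "left elbow", 7: "left wrist",
    8: "right hip", 9: "right knee", 10: "right ankle", 11: "left hip",
    12: "left knee", 13: "left ankle", 14: "right eye", 15: "left eye",
    16: "right ear", 17: "left ear", 18: "right hand tip",
    19: "left hand tip", 20: "right foot tip", 21: "left foot tip",
}
```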
The image processing and evaluation module first compares the trainer image with the coach image to obtain four indices: the distance index, angle index, frequency index and scale factor.
The distance index: the distance is determined by the coordinates of two key points. If the two key points are represented by $A(X_1, Y_1)$ and $B(X_2, Y_2)$, the distance between A and B is denoted $L_{AB}$, with default value 0; if either point is not detected, the default value is used. The calculation formula is

$$L_{AB} = \sqrt{(X_1 - X_2)^2 + (Y_1 - Y_2)^2}$$
The angle index is defined as follows. An angle is determined by the coordinates of three key points. If the three key points are denoted $A(X_1, Y_1)$, $B(X_2, Y_2)$ and $C(X_3, Y_3)$, the angle at the vertex $B$ is denoted $V_{ABC}$, with a value range of $[0, 180]$ degrees and a default value of 0; if any of the key points is not detected, the default value is used. The calculation formula is:

$$V_{ABC} = \frac{180}{\pi}\arccos\left(\frac{\overrightarrow{BA} \cdot \overrightarrow{BC}}{\lvert\overrightarrow{BA}\rvert\,\lvert\overrightarrow{BC}\rvert}\right)$$
the frequency index is: the number of times a periodic motion is completed in a unit time is denoted as F, and is used to indicate the speed of motion of a motion. In one action, some key points are taken as important mark points, the default value is 0, and if one mark point is not detected, the default value is taken.
In addition, to address the inconsistency between the camera distance of the yoga trainer and that of the coach video, this embodiment introduces a scale factor when comparing lengths between key points:

$$k = \frac{L_{AB} + L_{CD}}{L^{*}_{AB} + L^{*}_{CD}}$$

where $L_{AB}$ is the distance between the neck and nose key points in the trainer video, $L^{*}_{AB}$ is the distance between the neck and nose key points in the coach video, $L_{CD}$ is the distance between the two eye key points in the trainer video, and $L^{*}_{CD}$ is the distance between the two eye key points in the coach video.
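A minimal Python sketch of these three geometric quantities, under our assumption that undetected points are absent from the pose dict and the quantities default to 0 as stated above:

```python
import math

def distance(pose, a, b):
    # L_AB: Euclidean distance between key points a and b; 0 if either is missing.
    if a not in pose or b not in pose:
        return 0.0
    (x1, y1), (x2, y2) = pose[a], pose[b]
    return math.hypot(x1 - x2, y1 - y2)

def angle(pose, a, b, c):
    # V_ABC: angle at vertex b formed by points a-b-c, in degrees [0, 180];
    # 0 if any point is missing or a limb vector is degenerate.
    if any(p not in pose for p in (a, b, c)):
        return 0.0
    (xa, ya), (xb, yb), (xc, yc) = pose[a], pose[b], pose[c]
    v1, v2 = (xa - xb, ya - yb), (xc - xb, yc - yb)
    n1, n2 = math.hypot(*v1), math.hypot(*v2)
    if n1 == 0 or n2 == 0:
        return 0.0
    cos = max(-1.0, min(1.0, (v1[0] * v2[0] + v1[1] * v2[1]) / (n1 * n2)))
    return math.degrees(math.acos(cos))

def scale_factor(trainer_pose, coach_pose):
    # k: trainer skeleton size relative to the coach's, estimated from the
    # neck-nose distance (indices 1, 0) and the inter-eye distance (14, 15),
    # following the reconstructed formula above.
    t = distance(trainer_pose, 1, 0) + distance(trainer_pose, 14, 15)
    c = distance(coach_pose, 1, 0) + distance(coach_pose, 14, 15)
    return t / c if c > 0 else 1.0
```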
The weighting method in the joint index is as follows. A joint angle, denoted $V$, is defined as the angle at the middle one of three key points. Suppose there are $J$ such joint angles in total, each computed at its key point. All joint angles of the coach are denoted $V^{*}_j$, $j \in \{1, 2, \ldots, J\}$, and all joint angles of the trainer are denoted $V_j$, $j \in \{1, 2, \ldots, J\}$. The degree of deviation of the trainer from the coach at each angle is expressed as

$$d_j = \left|V_j - V^{*}_j\right|, \qquad j \in \{1, 2, \ldots, J\},$$

and the combined degree of deviation of the trainer from the coach over all angles is expressed as

$$D = \sum_{j=1}^{J} w_j d_j,$$

where $w_j$ is the weight of the $j$-th angle. The weight of each angle is obtained automatically by the following method. Let $i$ index the time sequence of the last ten sampling moments, with $i = 10$ the current moment and $i = 9$ the previous sampling moment, and let $x_{ij}$ be the difference between the trainer's angle and the coach's angle at the $j$-th key point in the $i$-th time step. Then

$$p_{ij} = \frac{x_{ij}}{\sum_{i=1}^{10} x_{ij}}, \qquad e_j = -k \sum_{i=1}^{10} p_{ij} \ln p_{ij}, \qquad w_j = \frac{1 - e_j}{\sum_{j=1}^{J} \left(1 - e_j\right)},$$

where $k = 1/\ln 10 > 0$ guarantees $0 \leq e_j \leq 1$.
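A sketch of this time-series entropy weighting in Python (NumPy assumed; all variable names are ours). It follows the standard entropy-weight construction, which we take to be what the formulas above describe:

```python
import numpy as np

def entropy_weights(x):
    # x: (T, J) array; x[i, j] = |trainer angle - coach angle| of joint j
    # at sampling moment i (T = 10 in the text, i = 10 being "now").
    eps = 1e-12
    p = x / (x.sum(axis=0) + eps)        # p_ij = x_ij / sum_i x_ij
    k = 1.0 / np.log(x.shape[0])         # k = 1 / ln(T), so 0 <= e_j <= 1
    e = -k * (p * np.log(p + eps)).sum(axis=0)
    d = 1.0 - e                          # divergence degree per joint
    return d / (d.sum() + eps)           # w_j, normalized to sum to 1

def combined_angle_deviation(trainer_angles, coach_angles, history):
    # D = sum_j w_j * |V_j - V*_j|, with w_j taken from a recent history window.
    w = entropy_weights(np.abs(history))
    return float(np.sum(w * np.abs(trainer_angles - coach_angles)))
```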
The action frequency, denoted $F$, is defined as the number of repetitions of a periodic action completed per unit time. With $F^{*}$ the frequency of the coach in the teaching video and $F$ the frequency of the trainer in the corresponding video, the deviation ratio of the trainer's action frequency from the coach's is

$$P_F = \frac{\left|F - F^{*}\right|}{F^{*}}.$$
The joint evaluation index combines the two quantities of frequency and angle, with weights $\lambda_F$ and $\lambda_V$ respectively. The joint evaluation is applied to each comparison between the trainer's action and the coach's action, so each comparison yields one joint index value, calculated as

$$P = \lambda_F P_F + \lambda_V D.$$

In practical application, if the action has a frequency component, $\lambda_F = \lambda_V = 0.5$ is taken; if the action has no frequency component, $\lambda_F = 0$ and $\lambda_V = 1$ are taken. The joint evaluation grade table for a single comparison thus obtained is shown in table 1.
TABLE 1 Joint evaluation grades for a single comparison

[Table 1 maps ranges of the joint evaluation index $P$ to the four grades A (excellent), B (good), C (qualified) and D (unqualified); the specific ranges are given in the original figure.]
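A sketch of the single-comparison evaluation in Python. The grade thresholds below are placeholders of our own choosing, since table 1 survives only as an image:

```python
def joint_index(angle_dev, freq_dev, has_frequency_part):
    # P = lambda_F * P_F + lambda_V * D, with the weight rule from the text.
    lf, lv = (0.5, 0.5) if has_frequency_part else (0.0, 1.0)
    return lf * freq_dev + lv * angle_dev

def grade(p, thresholds=(0.10, 0.25, 0.45)):
    # Thresholds are illustrative placeholders, NOT the patent's table 1 values.
    a, b, c = thresholds
    if p <= a:
        return "A"   # excellent
    if p <= b:
        return "B"   # good
    if p <= c:
        return "C"   # qualified
    return "D"       # unqualified
```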
After the trainer finishes the whole set of actions, the final evaluation of the trainer is given according to the joint evaluation index. Let $M$ be the total number of comparisons between the trainer and the coach during the exercise; in practical application this counter is accumulated after each comparison until the session ends. The numbers of comparisons graded A, B, C and D are recorded as $M_A$, $M_B$, $M_C$ and $M_D$ respectively. The comprehensive evaluation grade table for the whole set of actions is shown in table 2.

TABLE 2 Comprehensive evaluation grades for the whole set of actions

[Table 2 maps the grade counts $M_A$, $M_B$, $M_C$ and $M_D$, relative to the total $M$, to the four overall grades excellent, good, qualified and unqualified; the specific proportions are given in the original figure.]
When the comprehensive evaluation result is good or qualified, an action guidance instruction is output; when the comprehensive evaluation result is unqualified, the trainer is directly reminded to adjust according to the coach's posture.
When the image processing and evaluation module identifies that the yoga trainer's posture is in an adjustable state (i.e., the result is good or qualified), instructive guidance sentences are inferred from the matching condition of the key points according to the following rules.
Guidance sentences derived from single-object judgments:
Bending problem (instruction type 1). Key points compared (by angle): {neck, right shoulder, right elbow, right wrist, left shoulder, left elbow, left wrist, right hip, right knee, right ankle, left hip, left knee, left ankle}. Directions: {bend, straighten}. The direction discrimination function is

$$p_1 = V_{ABC} - V^{*}_{ABC},$$

where $V^{*}_{ABC}$ is the bending angle at key point $B$ in the coach video, $V_{ABC}$ is the bending angle at key point $B$ in the trainer video, and $A$ and $C$ are the key points adjacent to $B$. When $p_1$ is negative, the direction is judged as straighten; when $p_1$ is positive, the direction is judged as bend. Degree: $|p_1|$ is used directly to define the degree of bending.
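A sketch of this type-1 discriminant in Python, reusing the `angle` helper above (the joint adjacency map is our own illustration, covering only a few of the thirteen listed joints):

```python
# Illustrative adjacency: joint index -> (neighbour A, neighbour C).
BEND_JOINTS = {
    3: (2, 4),     # right elbow: shoulder-elbow-wrist
    6: (5, 7),     # left elbow
    9: (8, 10),    # right knee: hip-knee-ankle
    12: (11, 13),  # left knee
}

def bend_instruction(trainer_pose, coach_pose, joint):
    # p1 = V_ABC - V*_ABC; negative -> straighten, positive -> bend.
    a, c = BEND_JOINTS[joint]
    p1 = angle(trainer_pose, a, joint, c) - angle(coach_pose, a, joint, c)
    direction = "straighten" if p1 < 0 else "bend"
    return direction, abs(p1)  # degree is |p1|, in degrees
```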
Limb rotation problem (instruction type 2). Key points compared: {length from the left hand tip to the left wrist, length from the right hand tip to the right wrist, length from the left foot tip to the left ankle, length from the right foot tip to the right ankle}. Directions: {rotate toward the plane parallel to the camera, rotate away from the plane parallel to the camera}. The direction discrimination function is

$$p_2 = \frac{L_{AB}}{k\,L^{*}_{AB}},$$

where $k$ is the scale factor, $L^{*}_{AB}$ is the length of the line connecting key points $A$ and $B$ in the coach video, and $L_{AB}$ is the length of the line connecting key points $A$ and $B$ in the trainer video; key point $A$ is any one of the left hand tip, right hand tip, left foot tip and right foot tip, and key point $B$ is the corresponding point among the left wrist, right wrist, left ankle and right ankle. When $p_2 > 1$, the direction is judged as rotate away from the plane parallel to the camera; when $p_2 < 1$, the direction is judged as rotate toward the plane parallel to the camera. Degree: {slight, moderate, heavy}. The degree judgment rules are shown in table 3.
TABLE 3 Degree judgment rules for limb rotation

[Table 3 maps ranges of $p_2$ to the degrees slight, moderate and heavy; the specific ranges are given in the original figure.]
Limb distance adjustment problem (instruction type 3). Key points compared: {distance between the left and right wrists, distance between the left and right elbows, distance between the left and right knees, distance between the left and right ankles}. Directions: {increase, decrease}. The direction discrimination function is

$$p_3 = \frac{L_{AB}}{k\,L^{*}_{AB}},$$

where $k$ is the scale factor defined above, $L_{AB}$ is the corresponding key-point distance (left-right wrist, left-right elbow, left-right knee or left-right ankle) in the trainer video, and $L^{*}_{AB}$ is the same distance in the coach video. When $p_3 > 1$, the direction is judged as decrease; when $p_3 < 1$, the direction is judged as increase. Degree: {slight, moderate, large}. The degree judgment rules are shown in table 4.
TABLE 4 Degree judgment rules for limb distance adjustment

[Table 4 maps ranges of $p_3$ to the degrees slight, moderate and large; the specific ranges are given in the original figure.]
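Instruction types 2 and 3 share the same scale-corrected length ratio; a Python sketch follows (the degree cut-offs are our placeholders, since tables 3 and 4 survive only as images):

```python
def scaled_ratio(trainer_pose, coach_pose, a, b):
    # p = L_AB / (k * L*_AB): trainer segment length vs. scale-corrected coach's.
    k = scale_factor(trainer_pose, coach_pose)
    coach_len = k * distance(coach_pose, a, b)
    return distance(trainer_pose, a, b) / coach_len if coach_len > 0 else 1.0

def limb_distance_instruction(trainer_pose, coach_pose, a, b):
    # Type 3: p3 > 1 -> decrease the distance, p3 < 1 -> increase it.
    p3 = scaled_ratio(trainer_pose, coach_pose, a, b)
    direction = "decrease" if p3 > 1 else "increase"
    # Placeholder degree bands, NOT the patent's table 4 values.
    dev = abs(p3 - 1)
    degree = "slight" if dev < 0.1 else "moderate" if dev < 0.3 else "large"
    return direction, degree
```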
Waist twisting problem (instruction type 4). Key points compared: {distance between the left and right shoulders, distance between the left and right hips}. Directions: {twist the waist, twist the waist back}. The direction discrimination function is

$$p_4 = \frac{L^{*}_{AB} / L^{*}_{CD}}{L_{AB} / L_{CD}},$$

where $L_{AB}$ is the distance between the trainer's left and right shoulders, $L_{CD}$ is the distance between the trainer's left and right hips, $L^{*}_{AB}$ is the distance between the coach's left and right shoulders, and $L^{*}_{CD}$ is the distance between the coach's left and right hips. When $p_4 > 1$, the direction is judged as twist the waist back; when $p_4 < 1$, the direction is judged as twist the waist. Degree: {slight, moderate}. The degree judgment rules are shown in table 5.
TABLE 5 Degree judgment rules for the waist twisting action

[Table 5 maps ranges of $p_4$ to the degrees slight and moderate; the specific ranges are given in the original figure.]
Guidance sentences derived from multi-object comprehensive judgments:
For action problems requiring side bending (instruction type 5), the comparison factors are: {the angle between the line connecting the left and right shoulders and the horizontal, and the angle between the line connecting the left and right hips and the horizontal}. These angles are computed as

$$\theta = \arctan\left(\frac{\left|Y_A - Y_B\right|}{\left|X_A - X_B\right|}\right),$$

where $Y_A$ is the Y-axis coordinate of the left shoulder or left hip, $Y_B$ is the Y-axis coordinate of the right shoulder or right hip, $X_A$ is the X-axis coordinate of the left shoulder or left hip, and $X_B$ is the X-axis coordinate of the right shoulder or right hip. Directions: {bend sideways, bend back}. The direction discrimination function and degree are inferred from the two factors by fuzzy reasoning. The universe of discourse of the comparison factors and the evaluation result $Z_5$ is set to {5, 12, 20, 30, 40} (in degrees), and the linguistic variables are set to {very small, small, medium, large, very large}. The membership function settings of the comparison factors and the evaluation result $Z_5$ are shown in table 6.
TABLE 6 Membership function settings for the comparison factors and the evaluation result $Z_5$

[Table 6 gives the membership functions of the linguistic variables over the universe {5, 12, 20, 30, 40}; the specific values are given in the original figure.]
The composition adopts the max-min method, and the fuzzy implication adopts the intersection (minimum) method:

$$Z_5 = \left[A' \circ \left(A_i \rightarrow Z_{5i}\right)\right] \cup \left[B' \circ \left(B_i \rightarrow Z_{5i}\right)\right],$$

where $A'$ is the fuzzified degree of the angle between the shoulder line and the horizontal, $A_i \rightarrow Z_{5i}$ is the contribution of the $i$-th rule for the shoulder-line angle to the discrimination result, $B'$ is the fuzzified degree of the angle between the hip line and the horizontal, and $B_i \rightarrow Z_{5i}$ is the contribution of the $i$-th rule for the hip-line angle to the discrimination result. The direction discrimination function is

$$p_5 = Z_5 - Z^{*}_5,$$

where $Z_5$ is the direction discrimination and degree value of the trainer and $Z^{*}_5$ is that of the coach. The function is a difference of degrees: when $p_5$ is positive, the direction is judged as bend back; when $p_5$ is negative, the direction is judged as bend sideways. Degree: {slight, moderate, heavy}. The degree judgment rules are shown in table 7.
TABLE 7 Degree judgment rules for the side bending action

[Table 7 maps ranges of $p_5$ to the degrees slight, moderate and heavy; the specific ranges are given in the original figure.]
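A sketch of the max-min (Mamdani-style) inference for type 5 in Python. The membership shapes and rule matrix are invented placeholders, since table 6 survives only as an image; only the max-min composition and intersection implication follow the text:

```python
import numpy as np

# Universe of discourse for the factors and Z5, in degrees (from the text).
UNIVERSE = np.array([5, 12, 20, 30, 40], dtype=float)

def fuzzify(theta, width=8.0):
    # Triangular memberships centred on the universe points (placeholder shape).
    return np.clip(1.0 - np.abs(UNIVERSE - theta) / width, 0.0, 1.0)

def infer_z(theta_shoulder, theta_hip, rules):
    # rules: (5, 5) matrix; rules[i, j] = implication strength of antecedent
    # term i for consequent term j. Composition: max over i of
    # min(membership_i, rule_ij); the two channels are joined by union (max).
    a, b = fuzzify(theta_shoulder), fuzzify(theta_hip)
    za = np.max(np.minimum(a[:, None], rules), axis=0)
    zb = np.max(np.minimum(b[:, None], rules), axis=0)
    z = np.maximum(za, zb)
    # Centroid defuzzification to a single degree value.
    return float((z * UNIVERSE).sum() / z.sum()) if z.sum() > 0 else 0.0

def side_bend_instruction(trainer_angles, coach_angles, rules):
    # p5 = Z5(trainer) - Z5(coach); positive -> bend back, negative -> sideways.
    p5 = infer_z(*trainer_angles, rules) - infer_z(*coach_angles, rules)
    return ("bend back" if p5 > 0 else "bend sideways"), abs(p5)
```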
For action problems requiring pushing or tucking the hips (instruction type 6), the comparison factor is: {the difference between the average degree of knee flexion and the average degree of hip flexion}. This factor is obtained from the key-point angles by

$$\Delta V = \frac{\left(V_A - V^{*}_A\right) + \left(V_B - V^{*}_B\right)}{2},$$

where $V_A$ and $V^{*}_A$ are the average knee bending angles (left and right) in the trainer and coach videos respectively, and $V_B$ and $V^{*}_B$ are the average hip bending angles in the trainer and coach videos respectively. Directions: {push the hips forward, tuck the hips}. The direction discrimination function and degree are inferred from the two factors by fuzzy reasoning. The universe of discourse of the comparison factor and the evaluation result $Z_6$ is set to {-20, -10, -5, 0, 5, 10, 20} (in degrees), and the linguistic variables are set to {negative large, negative small, moderate, positive small, positive large}. The membership function settings of the comparison factor and the evaluation result $p_6$ are shown in table 8.
TABLE 8 Membership function settings for the comparison factor and the evaluation result $p_6$

[Table 8 gives the membership functions of the linguistic variables over the universe {-20, -10, -5, 0, 5, 10, 20}; the specific values are given in the original figure.]
The composition likewise adopts the max-min method and the fuzzy implication adopts the intersection method; that is, the inference model is identical to that used for $Z_5$ and $p_5$, with $Z_6$ and $p_6$ computed in the same way. When $p_6$ is positive, the direction is judged as tuck the hips; when $p_6$ is negative, the direction is judged as push the hips forward. Degree: {slight, moderate}. The degree judgment rules are shown in table 9.
TABLE 9 Degree judgment rules for pushing and tucking the hips

[Table 9 maps ranges of $p_6$ to the degrees slight and moderate; the specific ranges are given in the original figure.]
For action problems requiring lifting or hollowing the chest (instruction type 7), the comparison factors are: {the bending angle of the neck, the average bending angle of the hips, and the shoulder width}. A preliminary judgment result is inferred from the average hip bending degree and the neck bending degree, and after the preliminary result is obtained the degree is fine-tuned according to the shoulder width. The preliminary judgment result $Z_7$ is obtained with the same inference model as in the hip pushing and tucking problem, where $A'$ is the fuzzified degree of neck flexion, $A_i \rightarrow Z_{7i}$ is the contribution of the $i$-th rule for neck flexion to the discrimination result, $B'$ is the fuzzified degree of hip flexion, and $B_i \rightarrow Z_{7i}$ is the contribution of the $i$-th rule for hip flexion to the discrimination result. After obtaining $Z_7$, the direction discrimination function is

$$p_7 = Z_7 \cdot \frac{k\,L^{*}_{AB}}{L_{AB}},$$

where $L_{AB}$ is the shoulder width in the trainer video, $L^{*}_{AB}$ is the shoulder width in the coach video, and $k$ is the scale factor defined above. When $p_7$ is positive, the direction is judged as lift the chest; when $p_7$ is negative, the direction is judged as hollow the chest. Degree: {slight, moderate}. The degree judgment rules are shown in table 10.
TABLE 10 Degree judgment rules for lifting and hollowing the chest

[Table 10 maps ranges of $p_7$ to the degrees slight and moderate; the specific ranges are given in the original figure.]
So that the guidance sentences do not conflict with one another and the adjustment effect is maximized, the priorities of the above instruction types are set as shown in table 11.
TABLE 11 Priorities of the instruction types

[Table 11 assigns a priority to each of the seven instruction types; the specific assignment is given in the original figure.]
When priorities are equal, instructions are executed in order of instruction number. Once the image processing and evaluation module has inferred the direction and degree of an instruction type, the compared key points, the direction and the degree together form a guidance sentence.
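A sketch of the priority ordering and sentence splicing in Python (the priority values are placeholders, since table 11 survives only as an image):

```python
# Placeholder priorities (smaller = more urgent); NOT the patent's table 11.
PRIORITY = {1: 1, 2: 2, 3: 2, 4: 3, 5: 1, 6: 2, 7: 3}

def splice_instructions(elements):
    # elements: list of (instruction_type, keypoint_name, direction, degree).
    # Sort by priority, then by instruction number when priorities are equal.
    ordered = sorted(elements, key=lambda e: (PRIORITY[e[0]], e[0]))
    return [f"{kp}: {direction} ({degree})" for _, kp, direction, degree in ordered]

# Example: a bend correction outranks a chest adjustment.
print(splice_instructions([
    (7, "chest", "lift the chest", "slight"),
    (1, "right elbow", "straighten", "moderate"),
]))
```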
According to the grade division standard, the image processing and evaluation module packages the obtained evaluation results and guidance sentences. It simultaneously sends a data transmission request to the cloud data management module; after the cloud data management module receives the request and confirms that the channel is clear, it sends a feedback signal to the image processing and evaluation module, which then packages and sends the evaluation result data to the cloud data management module. After receiving the data, the cloud data management module stores it and forwards it to the voice broadcasting and evaluation display module.
The voice broadcasting and evaluation display module runs on a terminal, which may be a PC or a mobile phone. The voice broadcasting submodule mainly outputs, as speech, the data transmitted by the cloud data management module.
The voice broadcasting and evaluation display module further comprises a display module. The display module can be arranged on the PC or on the mobile phone, and it displays the image information transmitted by the cloud data management module. The displayed content includes the standard yoga video together with the skeleton model of the person in that video, drawn on the person's body, as well as the trainer's image together with the trainer's skeleton model, likewise drawn on the trainer's body. The display module can also display all the obtained evaluation information on the corresponding 22 key points, where the evaluation indexes include the distance index, the angle index, the frequency index, the comprehensive evaluation index and the parts that need to be adjusted.
Based on the yoga action guidance system of the embodiment of the present invention, as shown in fig. 2, the embodiment of the present invention further provides a yoga action guidance method based on computer vision, comprising the following steps:
S1, acquiring real-time image data of the trainer and the coach;

S2, extracting the positions of the key points of the human body in the image by using the OpenPose algorithm, constructing the human skeleton in a bottom-up manner, and obtaining the exercise posture information of the trainer and of the coach respectively;

S3, comparing the exercise posture information of the trainer with the exercise posture information of the coach to obtain the motion evaluation data of the trainer;

S4, judging whether action guidance should be given according to the motion evaluation data, and if so, inferring an action guidance sentence. Specifically, whether to give action guidance is decided according to the action grade: if the action is far from the standard, the trainer is directly prompted to adjust according to the coach video; if the action is adjustable, an action adjustment sentence is inferred; if the action meets the standard, image data continues to be acquired.

S5, performing yoga action guidance according to the action guidance sentence, after which the process ends.
In step S3, the evaluation indexes of the posture comparison include:

the distance index, i.e. the degree of deviation between the distance of two key point positions in the trainer's exercise posture and the distance of the same two key points in the coach's exercise posture;

the angle index, i.e. the degree of deviation between each included angle formed by the bones in the trainer's exercise posture and the corresponding included angle in the coach's exercise posture;

the frequency index, i.e. the ratio of the number of periodic actions completed per unit time in the trainer's exercise posture to the number completed per unit time in the coach's exercise posture;

and the scale factor, i.e. the proportionality coefficient between the size of the trainer's skeleton and the size of the coach's skeleton. For details, reference may be made to the corresponding parts of the yoga action guidance system, which are not repeated here.
In addition, in step S3, the posture comparison algorithm synthesizes the standard degree of a single action and the completion degree of the complete movement process, selects the angle index and the frequency index to form the joint evaluation index, and automatically weights the key points of each index by the time-series-based entropy weight method to obtain the comprehensive score of the trainer's action;
step S4 comprises: dividing the comprehensive score into four grades: unqualified, qualified, good and excellent; if the comprehensive score is good or qualified, outputting an instruction to infer an action guidance sentence, which is inferred according to the matching condition of the key point positions; and if the comprehensive score is unqualified, outputting an instruction for the trainer to self-adjust according to the coach's exercise posture.
The action guidance sentences comprise:

instruction type 1: bending problems;

instruction type 2: limb rotation problems;

instruction type 3: limb distance adjustment problems;

instruction type 4: waist twisting problems;

instruction type 5: side bending problems;

instruction type 6: hip pushing and tucking problems;

instruction type 7: chest lifting and hollowing problems.
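Putting steps S1 to S5 together, a minimal end-to-end loop sketch in Python; all collaborator names are ours, standing in for components the patent describes but does not name (`splice_instructions` is reused from the sketch above):

```python
def guidance_loop(capture, estimate_pose, compare, infer_guidance, grade, speak):
    # `capture` yields (trainer_frame, coach_frame) pairs; `estimate_pose`
    # wraps an OpenPose-style detector returning {index: (x, y)}; `compare`
    # returns the joint index P (S3); `infer_guidance` yields
    # (type, keypoint, direction, degree) tuples; `speak` is a TTS sink.
    for trainer_frame, coach_frame in capture:                # S1
        trainer = estimate_pose(trainer_frame)                # S2
        coach = estimate_pose(coach_frame)
        g = grade(compare(trainer, coach))                    # S3: four grades
        if g in ("B", "C"):                                   # S4: adjustable
            for sentence in splice_instructions(infer_guidance(trainer, coach)):
                speak(sentence)                               # S5: broadcast
        elif g == "D":
            speak("Please adjust your posture to match the coach video.")
        # grade "A": posture meets the standard, keep acquiring frames
```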
The yoga action guidance method according to the embodiment of the present invention corresponds to the yoga action guidance system; for details, reference may be made to the corresponding content in the description of the system, which is not repeated here.
According to the yoga action guidance system based on computer vision of the embodiment of the present invention, the image acquisition module and the voice broadcasting and evaluation display module are arranged on a mobile phone or PC, and no special image acquisition, display or broadcasting equipment is needed, so the trainer can easily acquire images and obtain evaluation and guidance information. The cloud data management module coordinates data transmission and storage, enlarges the processing capacity of the image processing and evaluation module, and gives the trainer access to historical data. The cloud data management module can receive data transmitted by several image acquisition modules at the same time, and the image processing and evaluation module can evaluate and derive action guidance for the data of multiple trainers in real time, so the system can serve multiple trainers simultaneously. Through the voice broadcasting and evaluation display module, the training comparison images, evaluation results and guidance commands are output together with speech, providing the trainer with a convenient interactive mode of action guidance. The system can help people perform standard actions without a coach and achieve the effect of a gym at home. Especially for beginners, this auxiliary system improves training efficiency by prompting the yoga trainer to adjust action details in time, while avoiding harm to the body caused by incorrect training postures. By building a complete system combining a camera device and a personal computer, individual yoga learning and training are facilitated, and the system can also be applied to teaching and monitoring scenarios or automatic assessment scenarios of yoga learning.
The yoga action guidance method based on computer vision provided by the invention has a concise and efficient algorithm, a wide application range, and relatively low requirements on the computing capability of hardware such as sensors and computers, so it can be deployed on multiple platforms. The comprehensive evaluation algorithm provides comprehensive and accurate evaluation results, and the action guidance derivation algorithm further translates the image comparison results into commands suited to adjusting the trainer's posture in a user-friendly manner.
The foregoing has outlined rather broadly the preferred embodiments and principles of the present invention and it will be appreciated that those skilled in the art may devise variations of the present invention that are within the spirit and scope of the appended claims.

Claims (10)

1. A yoga action instruction system based on computer vision, comprising:
the image acquisition module is used for acquiring images of the current body postures of the trainer and the coach in real time;
the image processing and evaluating module is used for acquiring human skeleton information from the image and carrying out evaluation of the posture of the trainer and reasoning of the action guidance statement based on the human skeleton information;
the voice broadcasting and evaluation display module is used for reminding the trainer whether the posture is standard or not, giving action guidance according to the action guidance statement and displaying the motion evaluation data of the trainer;
the cloud data management module is respectively in communication connection with the image acquisition module, the image processing and evaluation module and the voice broadcasting and evaluation display module, and is used for receiving and storing image data uploaded by the image acquisition module, sending the image data to the image processing and evaluation module, receiving motion evaluation data and motion guidance sentences uploaded by the image processing and evaluation module, and sending the motion evaluation data and the motion guidance sentences to the voice broadcasting and evaluation display module.
2. The yoga action guidance system of claim 1, wherein: the image processing and evaluation module comprises:
the image submodule is used for extracting the key point positions of the human body from the image data and constructing a human body skeleton in a connection line mode according to the key point positions in sequence so as to obtain the exercise posture information of the trainer and the exercise posture information of the coach;
the evaluation submodule is used for comparing the motion posture information of the trainer with the motion posture information of the coach to obtain the motion evaluation data of the trainer; and the motion evaluation device is also used for judging whether to carry out motion guidance according to the motion evaluation data and reasoning motion guidance sentences when motion guidance is required.
3. The yoga action guidance system of claim 2, wherein: the evaluation indexes of the posture comparison comprise:
the distance index, i.e. the degree of deviation between the distance of two key point positions in the trainer's exercise posture and the distance of the same two key points in the coach's exercise posture;

the angle index, i.e. the degree of deviation between each included angle formed by the bones in the trainer's exercise posture and the corresponding included angle in the coach's exercise posture;

the frequency index, i.e. the ratio of the number of periodic actions completed per unit time in the trainer's exercise posture to the number completed per unit time in the coach's exercise posture;
and the scale factor is a proportionality coefficient between the size of the skeleton of the trainer and the size of the skeleton of the coach.
4. The yoga action guidance system of claim 3, wherein: the posture comparison algorithm integrates the standard degree of a single action and the completion degree of the complete motion process, selects an angle index and a frequency index to form a combined evaluation index, and automatically weights the key point position of each index by an entropy weight method based on a time sequence so as to obtain the comprehensive score of the action of the trainer;
dividing the comprehensive score into four grades of unqualified, qualified, good and excellent; if the comprehensive score is good or qualified, outputting an instruction for inferring the action guidance sentence, and inferring the action guidance sentence according to the matching condition of the key point positions; and if the comprehensive score is unqualified, outputting an instruction for self-adjustment according to the exercise posture of the coach.
5. The yoga action guidance system of claim 4, wherein: the action guidance statement includes:
instruction type 1: bending problems;

instruction type 2: limb rotation problems;

instruction type 3: limb distance adjustment problems;

instruction type 4: waist twisting problems;

instruction type 5: side bending problems;

instruction type 6: hip pushing and tucking problems;

instruction type 7: chest lifting and hollowing problems;
all instruction types are provided with priorities; and if the priorities are the same, executing according to the numbering sequence of the instruction types.
6. The yoga action guiding method based on computer vision is characterized by comprising the following steps of:
S1, acquiring real-time image data of the trainer and the coach;

S2, extracting the positions of the key points of the human body in the image by using the OpenPose algorithm, constructing the human skeleton in a bottom-up manner, and obtaining the exercise posture information of the trainer and of the coach respectively;

S3, comparing the exercise posture information of the trainer with the exercise posture information of the coach to obtain the motion evaluation data of the trainer;

S4, judging whether action guidance should be given according to the motion evaluation data, and if so, inferring an action guidance sentence;

and S5, performing yoga action guidance according to the action guidance sentence.
7. The yoga action guidance method of claim 6, wherein the yoga action guidance method comprises: in step S3, the evaluation index of the posture comparison includes:
the distance index, i.e. the degree of deviation between the distance of two key point positions in the trainer's exercise posture and the distance of the same two key points in the coach's exercise posture;

the angle index, i.e. the degree of deviation between each included angle formed by the bones in the trainer's exercise posture and the corresponding included angle in the coach's exercise posture;

the frequency index, i.e. the ratio of the number of periodic actions completed per unit time in the trainer's exercise posture to the number completed per unit time in the coach's exercise posture;
and the scale factor is a proportionality coefficient between the size of the skeleton of the trainer and the size of the skeleton of the coach.
8. The yoga action guidance method of claim 7, wherein the yoga action guidance method comprises: in the step S3, the posture comparison algorithm synthesizes the standard degree of a single action and the completion degree of the complete movement process, selects the angle index and the frequency index to form a joint evaluation index, and automatically weights the key point position of each index by an entropy weight method based on a time sequence to obtain the comprehensive score of the action of the trainer;
the step S4 comprises: dividing the comprehensive score into four grades of unqualified, qualified, good and excellent; if the comprehensive score is good or qualified, outputting an instruction for inferring the action guidance sentence, and inferring the action guidance sentence according to the matching condition of the key point positions; and if the comprehensive score is unqualified, outputting an instruction for self-adjustment according to the exercise posture of the coach.
9. The yoga action guidance method of claim 8, wherein the yoga action guidance method comprises: the action guidance statement includes:
instruction type 1: bending problems;

instruction type 2: limb rotation problems;

instruction type 3: limb distance adjustment problems;

instruction type 4: waist twisting problems;

instruction type 5: side bending problems;

instruction type 6: hip pushing and tucking problems;

instruction type 7: chest lifting and hollowing problems.
10. The yoga action guidance method of claim 9, wherein the yoga action guidance method comprises:
setting priorities for all instruction types; if the priorities are the same, executing according to the numbering sequence of the instruction types;
if the direction and the degree of any instruction type are deduced, the compared key points, the direction and the degree form an action guidance statement together.
CN202010393060.XA 2020-05-11 2020-05-11 Yoga action guidance system and method based on computer vision Pending CN111652078A (en)
