CN110841266A - Auxiliary training system and method - Google Patents

Auxiliary training system and method

Info

Publication number
CN110841266A
CN110841266A (application CN201911018970.3A)
Authority
CN
China
Prior art keywords: module, three-dimensional skeleton key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201911018970.3A
Other languages
Chinese (zh)
Inventor
闫野
尹健
印二威
谢良
乔运浩
邓宝松
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Original Assignee
Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
National Defense Technology Innovation Institute PLA Academy of Military Science
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center, National Defense Technology Innovation Institute PLA Academy of Military Science filed Critical Tianjin (binhai) Intelligence Military-Civil Integration Innovation Center
Priority to CN201911018970.3A priority Critical patent/CN110841266A/en
Publication of CN110841266A publication Critical patent/CN110841266A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G06V40/23 Recognition of whole body movements, e.g. for sport training
    • A HUMAN NECESSITIES
    • A63 SPORTS; GAMES; AMUSEMENTS
    • A63B APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
    • A63B71/00 Games or sports accessories not covered in groups A63B1/00 - A63B69/00
    • A63B71/06 Indicating or scoring devices for games or players, or for other sports activities
    • A63B71/0619 Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
    • A63B2071/0647 Visualisation of executed movements

Landscapes

  • Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an auxiliary training system and method. The system comprises: an augmented reality module for acquiring a standard video, simulating a real environment based on the standard video to demonstrate standard actions, and sending a first time axis and the first three-dimensional skeleton key points of the standard actions during the demonstration to an action analysis module; a gesture capture module for capturing second three-dimensional skeleton key points and a second time axis during training and sending them to the action analysis module; and the action analysis module for selecting, according to the first and second time axes, the first and second three-dimensional skeleton key points at the same moment, comparing them, and generating guidance information. By projecting the standard actions into the real environment, the system effectively restores the real scene and provides the standard posture as a reference for a single trainee practicing formation drill, avoiding the inconvenience of training many people at once; by rapidly capturing the training posture and comparing it with the standard posture to give guidance information, the trainee is assisted accurately.

Description

Auxiliary training system and method
Technical Field
The invention relates to the technical field of computer vision, in particular to an auxiliary training system and method.
Background
At present, in the field of movement training, corrections are still mostly made by visual inspection. For example, when soldiers are trained in formation drill, multiple persons are usually required to train at the same time so that the formation and the movements remain standardized and uniform. During parade-step training in particular, because strict uniformity of movement is pursued, the whole square formation must keep training as long as even one person performs imperfectly; in the later stage of training, the movements of an individual trainee can affect the uniformity of the whole formation, forcing the entire formation to continue training and seriously affecting the quality and progress of movement training. Visual inspection, however, is inaccurate when a person trains alone.
At present, no technical solution that applies computer vision to single-person training has been found.
Disclosure of Invention
To overcome the above deficiencies in the prior art, the present invention provides an auxiliary training system and method. The object is achieved by the following technical solutions.
A first aspect of the invention proposes an auxiliary training system, the system comprising:
an augmented reality module, configured to acquire a standard video, simulate a real environment based on the standard video to demonstrate standard actions, and send a first time axis and the first three-dimensional skeleton key points of the standard actions during the demonstration to the action analysis module;
a gesture capture module, configured to capture second three-dimensional skeleton key points and a second time axis while the user follows the standard actions in training, and send them to the action analysis module;
and an action analysis module, configured to select, according to the first and second time axes, the first and second three-dimensional skeleton key points at the same moment, compare them, and generate and prompt guidance information according to the comparison result.
A second aspect of the present invention provides an auxiliary training method, comprising:
acquiring a standard video through an augmented reality module, simulating a real environment based on the standard video to demonstrate standard actions, and sending a first time axis and the first three-dimensional skeleton key points of the standard actions during the demonstration to an action analysis module;
capturing, through a gesture capture module, second three-dimensional skeleton key points and a second time axis while the user follows the standard actions in training, and sending them to the action analysis module;
and selecting, through the action analysis module and according to the first and second time axes, the first and second three-dimensional skeleton key points at the same moment for comparison, and generating and prompting guidance information according to the comparison result.
In the embodiments of the application, the standard actions are projected into the real environment through the augmented reality module, which effectively restores the real scene and provides the standard drill posture as a reference for a single trainee, making training immersive, intuitive and concrete while avoiding the inconvenience of training many people at the same time; the gesture capture module, based on computer vision technology, rapidly captures and analyzes the user's training posture, the action analysis module compares it with the standard posture in real time and gives real-time guidance information according to the comparison result, accurately assisting the trainee in training alone and providing a good user experience.
Drawings
The accompanying drawings, which are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the invention and not to limit the invention. In the drawings:
Fig. 1 is a schematic structural diagram of an auxiliary training system according to an exemplary embodiment of the present invention;
Fig. 2 is a flowchart of an embodiment of an auxiliary training method according to an exemplary embodiment of the present invention.
Detailed Description
Reference will now be made in detail to the exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, like numbers in different drawings represent the same or similar elements unless otherwise indicated. The embodiments described in the following exemplary embodiments do not represent all embodiments consistent with the present invention. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the invention, as detailed in the appended claims.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in this specification and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items.
It is to be understood that although the terms first, second, third, etc. may be used herein to describe various information, the information should not be limited by these terms. These terms are only used to distinguish one type of information from another. For example, first information may also be referred to as second information and, similarly, second information may also be referred to as first information, without departing from the scope of the present invention. The word "if" as used herein may be interpreted as "upon", "when", or "in response to determining", depending on the context.
Fig. 1 is a schematic structural diagram of an auxiliary training system according to an exemplary embodiment of the present invention. The system comprises an augmented reality module, a gesture capture module and an action analysis module.
The augmented reality module is configured to acquire a standard video, simulate a real environment based on the standard video to demonstrate standard actions, and send a first time axis and the first three-dimensional skeleton key points of the standard actions during the demonstration to the action analysis module;
the gesture capture module is configured to capture second three-dimensional skeleton key points and a second time axis while the user follows the standard actions in training, and send them to the action analysis module;
and the action analysis module is configured to select, according to the first and second time axes, the first and second three-dimensional skeleton key points at the same moment, compare them, and generate and prompt guidance information according to the comparison result.
In this embodiment, the standard actions are projected into the real environment through the augmented reality module, which effectively restores the real scene and provides the standard drill posture as a reference for a single trainee in the formation, making training immersive, intuitive and concrete while avoiding the inconvenience of training many people at the same time; the gesture capture module, based on computer vision technology, rapidly captures and analyzes the user's training posture, the action analysis module compares it with the standard posture in real time and gives real-time guidance information according to the comparison result, accurately assisting the trainee and providing a good user experience.
In an embodiment, the augmented reality module may be disposed in a pair of head-mounted glasses so that the user can conveniently put it on and take it off. The head-mounted glasses may adopt semitransparent holographic imaging, and the user can add various exercise actions and virtual scenes, such as the front row, middle, side, or rear of a square formation.
Exemplarily, the augmented reality module can send the first time axis and the first three-dimensional skeleton key points of the standard actions during the demonstration to the action analysis module wirelessly, which reduces the weight of the head-mounted glasses and further improves the user experience.
In one embodiment, as shown in fig. 1, the auxiliary training system may further include a camera module comprising at least two cameras disposed at different positions around the user, each camera corresponding to one shooting angle of view. Each camera in the camera module acquires images at the same frame rate and transmits the acquired images to the gesture capture module.
Wherein each camera may wirelessly communicate the captured image to the gesture capture module.
For example, the camera module may include three cameras arranged in a triangle around the user, so as to capture the user's action posture and form multiple groups of images of the same subject from different viewing angles, as sketched below.
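As a non-authoritative sketch of such a camera module (the device indices, the frame rate, and the use of OpenCV are illustrative assumptions, not details from the patent), three cameras can be opened at a common frame rate and polled nearly simultaneously:

```python
# Minimal sketch of synchronized three-camera capture around the trainee.
# Device indices and frame rate are illustrative assumptions.
import cv2

CAM_INDICES = [0, 1, 2]   # hypothetical device indices of the three cameras
FRAME_RATE = 30           # every camera must share the same frame rate

caps = [cv2.VideoCapture(i) for i in CAM_INDICES]
for cap in caps:
    cap.set(cv2.CAP_PROP_FPS, FRAME_RATE)

def grab_synced_frames():
    """Return one frame per camera, grabbed as close in time as possible."""
    for cap in caps:           # grab() first so all exposures line up closely
        cap.grab()
    return [cap.retrieve()[1] for cap in caps]
```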
It should be noted that the camera module may also perform human body detection on each acquired image frame and, when no human body is detected within a certain period, send a sleep instruction to each module so that the auxiliary training system enters a sleep state.
Correspondingly, the auxiliary training system may further include a wake-up module, which may also be disposed in the head-mounted glasses and is configured to listen for an external wake-up command that brings the auxiliary training system back into the awake state.
In an embodiment, as further shown in fig. 1, the auxiliary training system may further include a storage module for acquiring and storing the second three-dimensional skeleton key points from the gesture capture module, the images of different viewing angles acquired by the camera module at the same moment, and the guidance information generated by the action analysis module. The user can further improve the training effect by playing back the information stored in the storage module.
In an embodiment, the gesture capture module may include an image acquisition unit and a gesture extraction unit; the image acquisition unit is used for acquiring images of the user from different viewing angles at the same moment and sending the images to the gesture extraction unit; the gesture extraction unit is used for obtaining the second three-dimensional skeleton key points from the images of different viewing angles and sending the moment and the second three-dimensional skeleton key points to the action analysis module.
The second time axis comprises a plurality of moments, and each moment corresponds to a group of second three-dimensional skeleton key points.
For example, the gesture extraction unit may include a two-dimensional posture extraction unit and a three-dimensional posture reproduction unit. The two-dimensional posture extraction unit is used for extracting the user's two-dimensional skeleton key points in each view image and sending them to the three-dimensional posture reproduction unit; the three-dimensional posture reproduction unit is used for fitting the two-dimensional skeleton key points of the different views to obtain the second three-dimensional skeleton key points corresponding to the moment, and sending the moment and the second three-dimensional skeleton key points to the action analysis module.
In an embodiment, as further shown in fig. 1, the auxiliary training system may further include a voice module configured to receive the guidance information sent by the action analysis module and give voice prompts.
The voice module can be arranged at the ear position of the head-mounted glasses so as to prompt the user's training actions by voice. The action analysis module can send the guidance information to the voice module wirelessly; since only the augmented reality module and the voice module are disposed in the head-mounted glasses, and both interact with the other modules wirelessly, the structure is simple and easy for the user to operate.
Fig. 2 is a flowchart of an embodiment of an auxiliary training method according to an exemplary embodiment of the present invention. This embodiment is based on the auxiliary training system shown in fig. 1. As shown in fig. 2, the auxiliary training method includes the following steps:
step 201: the method comprises the steps of obtaining a standard video through an augmented reality module, simulating a real environment based on the standard video to demonstrate standard actions, and sending a first time axis and first three-dimensional skeleton key points of the standard actions to an action analysis module in the demonstrating process.
In an embodiment, when acquiring the standard video, the augmented reality module may receive an action selection instruction and acquire the standard video corresponding to the action type carried by the instruction.
The user may operate the augmented reality module to download the standard video corresponding to the selected action type. The action types may include, for example, different marching steps such as the quick march and the parade step.
In the invention, the first time axis consists of the different moments of the demonstration, and each moment corresponds to one group of first three-dimensional skeleton key points of the standard actions. For example, consecutive moments on the first time axis may be separated by a fixed time period, as illustrated below.
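For illustration only (the spacing and the demonstration length below are assumptions, not values from the patent), such a time axis could be built as:

```python
# Hypothetical first time axis: moments spaced by a fixed period, each
# indexing one group of standard-action 3D skeleton key points.
FIXED_PERIOD = 1.0 / 30              # assumed spacing, e.g. one video frame
NUM_MOMENTS = 300                    # assumed length of the demonstration

first_time_axis = [k * FIXED_PERIOD for k in range(NUM_MOMENTS)]
# standard_keypoints[k] would hold the (J, 3) array of first
# three-dimensional skeleton key points demonstrated at first_time_axis[k].
```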
It can be understood by those skilled in the art that the first three-dimensional skeleton key points of the standard actions may be extracted during the demonstration from the standard-action video images corresponding to each moment on the first time axis, or may be extracted from those images in advance; this is not limited here.
Step 202: capture, through the gesture capture module, second three-dimensional skeleton key points and a second time axis while the user follows the standard actions in training, and send them to the action analysis module.
In an embodiment, images of the user from different viewing angles at the same moment are acquired from the camera module, and the second three-dimensional skeleton key points corresponding to that moment are obtained from these images.
To ensure capture efficiency, the gesture capture module may acquire images at a fixed time interval, which the user sets according to the camera frame rate.
To obtain the second three-dimensional skeleton key points corresponding to a moment from the images of different viewing angles, the user's two-dimensional skeleton key points are extracted from each view image, and the two-dimensional skeleton key points of the different views are fitted to yield the second three-dimensional skeleton key points for that moment.
Specifically, the gesture capture module may feed each view image into a trained extraction network model that extracts the two-dimensional skeleton key points of that view; the two-dimensional skeleton key points of the different views, together with the intrinsic and extrinsic parameters of the cameras in the camera module, are then fed into a trained reconstruction network model, which fits the two-dimensional skeleton key points of the different views into the final second three-dimensional skeleton key points.
Illustratively, the extraction network model may be an OpenPose model, and the reconstruction network model may be a 3DPS (3D pictorial structures) model, as in the simplified sketch below.
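The patent gives no implementation details; the following sketch replaces the 3DPS fitting stage with plain linear triangulation from two views, which is only a simplified stand-in, and assumes the 2D key points were already produced by an OpenPose-style extractor:

```python
# Simplified stand-in for the 2D-to-3D stage: triangulate OpenPose-style
# 2D skeleton key points from two calibrated views. A real 3DPS model fits
# all views jointly; plain triangulation is used here only for illustration.
import numpy as np
import cv2

def triangulate_keypoints(P1, P2, kps1, kps2):
    """
    P1, P2 : (3, 4) camera projection matrices (intrinsics @ extrinsics).
    kps1, kps2 : (J, 2) two-dimensional skeleton key points from two views.
    Returns the (J, 3) second three-dimensional skeleton key points.
    """
    pts1 = np.asarray(kps1, dtype=np.float64).T          # to 2 x J
    pts2 = np.asarray(kps2, dtype=np.float64).T
    pts_h = cv2.triangulatePoints(P1, P2, pts1, pts2)    # 4 x J homogeneous
    return (pts_h[:3] / pts_h[3]).T                      # to Euclidean (J, 3)
```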
Those skilled in the art will appreciate that training the extraction network model for two-dimensional skeleton key points and the reconstruction network model for three-dimensional skeleton key points can be implemented with existing techniques.
The two-dimensional skeleton key points refer to key points which represent a certain part of a human body by using two-dimensional coordinates, and the three-dimensional skeleton key points refer to key points which represent a certain part of a human body by using three-dimensional coordinates. These key points may be leg joints, arm joints, etc.
It should be noted that, before the two-dimensional skeleton key points of the different views are fitted, a subset may be selected from the extracted skeleton key points according to the number of points the reconstruction network model requires.
Step 203: select, through the action analysis module and according to the first and second time axes, the first and second three-dimensional skeleton key points at the same moment for comparison, and generate and prompt guidance information according to the comparison result.
Since the first time axis includes a plurality of moments, each corresponding to a group of first three-dimensional skeleton key points, and the second time axis likewise includes a plurality of moments, each corresponding to a group of second three-dimensional skeleton key points, comparing the training action with the standard action requires comparing the two postures at the same moment. The first and second three-dimensional skeleton key points at the same moment are therefore selected according to the first and second time axes, as sketched below.
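A minimal sketch of this same-moment selection (the matching tolerance is an assumption) pairs each training moment with the nearest demonstration moment:

```python
# Pair the two time axes: for every moment on the second (training) axis,
# pick the nearest moment on the first (standard) axis, so the two postures
# compared belong to the same instant. The tolerance value is assumed.
import numpy as np

def pair_same_moments(first_axis, second_axis, tol=0.02):
    first_axis = np.asarray(first_axis, dtype=float)
    pairs = []
    for j, t in enumerate(second_axis):
        i = int(np.argmin(np.abs(first_axis - t)))
        if abs(first_axis[i] - t) <= tol:   # accept only close matches
            pairs.append((i, j))            # (standard index, training index)
    return pairs
```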
Note that every key point among the first and second three-dimensional skeleton key points is expressed as three-dimensional coordinates in the world coordinate system.
In one embodiment, after the first and second three-dimensional skeleton key points are selected, the comparison proceeds as follows: a reference coordinate system is established with the center between the two feet of the human body as the origin; the first and second three-dimensional skeleton key points are converted into relative coordinates in this reference system; the relative coordinates of key points belonging to the same body part are compared to obtain a comparison result for each part; and guidance information is then generated from these results. A sketch of this procedure follows.
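The sketch below follows that description; the joint indices and the deviation threshold are illustrative assumptions:

```python
# Re-express both key point sets in a reference frame whose origin is the
# midpoint of the two feet, then compare like parts. Indices are assumed.
import numpy as np

L_FOOT, R_FOOT = 15, 16    # hypothetical indices of the two foot key points

def to_reference_frame(kps):
    """kps: (J, 3) world coordinates to relative coords, feet-center origin."""
    origin = (kps[L_FOOT] + kps[R_FOOT]) / 2.0
    return kps - origin

def compare_parts(std_kps, train_kps, threshold=0.05):
    """Return {joint index: deviation} for parts differing beyond threshold."""
    std_rel = to_reference_frame(np.asarray(std_kps, dtype=float))
    train_rel = to_reference_frame(np.asarray(train_kps, dtype=float))
    deviations = np.linalg.norm(train_rel - std_rel, axis=1)  # per-part error
    return {j: float(d) for j, d in enumerate(deviations) if d > threshold}
```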
For example, the action analysis module may generate the guidance information from the comparison results of several consecutive moments.
In an exemplary scene, take parade-step training in a square formation as an example. While the augmented reality module demonstrates the standard action, the marching cadence ("one, two, one") in the standard video is played through the voice module. The gesture capture module captures the trainee's leg, knee, arm and wrist key points in real time and sends the captured data together with the capture moments to the action analysis module; the augmented reality module likewise sends the leg, knee, arm and wrist key points of the standard action to the action analysis module. The action analysis module can then select and compare, in real time, the trainee's key points and the standard key points at the same moment. Because a square formation must be neat, with limbs raised to a consistent height, the comparison yields the height difference between the trainee's knee and the standard knee relative to the ground, and likewise between the trainee's wrist and the standard wrist, and guidance information is generated from these height differences and prompted, as in the sketch below.
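A hedged sketch of that guidance step (the key point indices, the vertical-axis convention, the 3 cm tolerance, and the message wording are all assumptions, not details from the patent):

```python
# Illustrative guidance for the parade-step scene: compare knee and wrist
# heights (assumed z-up reference frame) against the standard pose and
# phrase the differences. All indices and thresholds are assumptions.
KNEE, WRIST = 13, 7          # hypothetical key point indices
TOL = 0.03                   # tolerated height difference, metres

def height_guidance(std_rel, train_rel):
    messages = []
    for name, idx in (("knee", KNEE), ("wrist", WRIST)):
        diff = train_rel[idx][2] - std_rel[idx][2]   # height above the ground
        if diff > TOL:
            messages.append(f"lower your {name} by about {diff * 100:.0f} cm")
        elif diff < -TOL:
            messages.append(f"raise your {name} by about {-diff * 100:.0f} cm")
    return messages
```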
In the embodiments of the application, the standard actions are projected into the real environment through the augmented reality module, which effectively restores the real scene and provides the standard drill posture as a reference for a single trainee in the formation, making training immersive, intuitive and concrete while avoiding the inconvenience of training many people at the same time; the gesture capture module, based on computer vision technology, rapidly captures and analyzes the user's training posture, the action analysis module compares it with the standard posture in real time and gives real-time guidance information according to the comparison result, accurately assisting the trainee and providing a good user experience.
Other embodiments of the invention will be apparent to those skilled in the art from consideration of the specification and practice of the invention disclosed herein. This invention is intended to cover any variations, uses, or adaptations of the invention following, in general, the principles of the invention and including such departures from the present disclosure as come within known or customary practice within the art to which the invention pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the invention being indicated by the following claims.
It should also be noted that the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element introduced by the phrase "comprising a ..." does not exclude the presence of other like elements in the process, method, article, or apparatus that comprises the element.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (10)

1. An auxiliary training system, the system comprising:
an augmented reality module, configured to acquire a standard video, simulate a real environment based on the standard video to demonstrate standard actions, and send a first time axis and the first three-dimensional skeleton key points of the standard actions during the demonstration to the action analysis module;
a gesture capture module, configured to capture second three-dimensional skeleton key points and a second time axis while the user follows the standard actions in training, and send them to the action analysis module;
and an action analysis module, configured to select, according to the first and second time axes, the first and second three-dimensional skeleton key points at the same moment, compare them, and generate and prompt guidance information according to the comparison result.
2. The system of claim 1, wherein the gesture capture module comprises:
an image acquisition unit, configured to acquire images of the user from different viewing angles at the same moment and send the images to the gesture extraction unit;
and a gesture extraction unit, configured to obtain the second three-dimensional skeleton key points from the images of different viewing angles and send the moment and the second three-dimensional skeleton key points to the action analysis module;
wherein the second time axis comprises a plurality of moments, and each moment corresponds to a group of second three-dimensional skeleton key points.
3. The system of claim 2, wherein the gesture extraction unit comprises:
a two-dimensional posture extraction unit, configured to extract the user's two-dimensional skeleton key points in each view image and send them to the three-dimensional posture reproduction unit;
and a three-dimensional posture reproduction unit, configured to fit the two-dimensional skeleton key points of the images of different viewing angles to obtain the second three-dimensional skeleton key points corresponding to the moment, and send the moment and the second three-dimensional skeleton key points to the action analysis module.
4. The system of claim 1, further comprising a camera module comprising at least two cameras disposed at different positions around the user, each camera corresponding to a shooting perspective.
Each camera in the camera module acquires images at the same frame rate and transmits the acquired images to the gesture capture module.
5. The system of claim 4, further comprising:
a storage module, configured to acquire and store the second three-dimensional skeleton key points from the gesture capture module, the images of different viewing angles acquired by the camera module at the same moment, and the guidance information generated by the action analysis module.
6. The system of claim 1, further comprising:
a voice module, configured to receive the guidance information sent by the action analysis module and give voice prompts;
wherein the augmented reality module and the voice module are integrated in the head-mounted glasses.
7. An auxiliary training method, the method comprising:
acquiring a standard video through an augmented reality module, simulating a real environment based on the standard video to demonstrate standard actions, and sending a first time axis and the first three-dimensional skeleton key points of the standard actions during the demonstration to an action analysis module;
capturing, through a gesture capture module, second three-dimensional skeleton key points and a second time axis while the user follows the standard actions in training, and sending them to the action analysis module;
and selecting, through the action analysis module and according to the first and second time axes, the first and second three-dimensional skeleton key points at the same moment for comparison, and generating and prompting guidance information according to the comparison result.
8. The method of claim 7, wherein obtaining the standard video through the augmented reality module comprises:
receiving an action selection instruction;
and acquiring a standard video corresponding to the action type carried by the action selection instruction.
9. The method of claim 7, wherein capturing, through the gesture capture module, second three-dimensional skeleton key points and a second time axis while the user follows the standard actions in training comprises:
acquiring images of the user from different viewing angles at the same moment;
and obtaining the second three-dimensional skeleton key points corresponding to the moment from the images of different viewing angles.
10. The method of claim 9, wherein obtaining the second three-dimensional skeleton key points corresponding to the moment from the images of different viewing angles comprises:
extracting the user's two-dimensional skeleton key points in each view image;
and fitting the two-dimensional skeleton key points of the images of different viewing angles to obtain the second three-dimensional skeleton key points corresponding to the moment.
CN201911018970.3A 2019-10-24 2019-10-24 Auxiliary training system and method Pending CN110841266A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911018970.3A CN110841266A (en) 2019-10-24 2019-10-24 Auxiliary training system and method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911018970.3A CN110841266A (en) 2019-10-24 2019-10-24 Auxiliary training system and method

Publications (1)

Publication Number Publication Date
CN110841266A true CN110841266A (en) 2020-02-28

Family

ID=69596896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911018970.3A Pending CN110841266A (en) 2019-10-24 2019-10-24 Auxiliary training system and method

Country Status (1)

Country Link
CN (1) CN110841266A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112116981A (en) * 2020-09-02 2020-12-22 杭州憶盛医疗科技有限公司 Rehabilitation training method, device and system based on artificial intelligence and storage medium
CN113063411A (en) * 2020-06-29 2021-07-02 河北工业大学 Exoskeleton evaluation system and method of use thereof
CN115346419A (en) * 2022-07-11 2022-11-15 南昌大学 Training auxiliary system based on visible light communication

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678859A (en) * 2012-09-12 2014-03-26 财团法人工业技术研究院 Motion comparison method and motion comparison system
US20140228985A1 (en) * 2013-02-14 2014-08-14 P3 Analytics, Inc. Generation of personalized training regimens from motion capture data
CN109191588A (en) * 2018-08-27 2019-01-11 百度在线网络技术(北京)有限公司 Move teaching method, device, storage medium and electronic equipment
CN109684943A (en) * 2018-12-07 2019-04-26 北京首钢自动化信息技术有限公司 A kind of sportsman's supplemental training data capture method, device and electronic equipment
CN110045823A (en) * 2019-03-12 2019-07-23 北京邮电大学 A kind of action director's method and apparatus based on motion capture

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103678859A (en) * 2012-09-12 2014-03-26 财团法人工业技术研究院 Motion comparison method and motion comparison system
US20140228985A1 (en) * 2013-02-14 2014-08-14 P3 Analytics, Inc. Generation of personalized training regimens from motion capture data
CN109191588A (en) * 2018-08-27 2019-01-11 百度在线网络技术(北京)有限公司 Move teaching method, device, storage medium and electronic equipment
CN109684943A (en) * 2018-12-07 2019-04-26 北京首钢自动化信息技术有限公司 A kind of sportsman's supplemental training data capture method, device and electronic equipment
CN110045823A (en) * 2019-03-12 2019-07-23 北京邮电大学 A kind of action director's method and apparatus based on motion capture

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113063411A (en) * 2020-06-29 2021-07-02 河北工业大学 Exoskeleton evaluation system and method of use thereof
CN112116981A (en) * 2020-09-02 2020-12-22 杭州憶盛医疗科技有限公司 Rehabilitation training method, device and system based on artificial intelligence and storage medium
CN112116981B (en) * 2020-09-02 2023-12-22 杭州憶盛医疗科技有限公司 Rehabilitation training method, device and system based on artificial intelligence and storage medium
CN115346419A (en) * 2022-07-11 2022-11-15 南昌大学 Training auxiliary system based on visible light communication
CN115346419B (en) * 2022-07-11 2023-08-29 南昌大学 Training auxiliary system based on visible light communication

Similar Documents

Publication Title
CN104699247B (en) A kind of virtual reality interactive system and method based on machine vision
JP6515813B2 (en) INFORMATION PROCESSING APPARATUS, INFORMATION PROCESSING METHOD, AND PROGRAM
CN109087379B (en) Facial expression migration method and facial expression migration device
CN103578135B (en) The mutual integrated system of stage that virtual image combines with real scene and implementation method
CN110544301A (en) Three-dimensional human body action reconstruction system, method and action training system
US11945125B2 (en) Auxiliary photographing device for dyskinesia analysis, and control method and apparatus for auxiliary photographing device for dyskinesia analysis
CN106652590B (en) Teaching method, teaching identifier and tutoring system
CN110841266A (en) Auxiliary training system and method
CN111696140B (en) Monocular-based three-dimensional gesture tracking method
CN107004279A (en) Natural user interface camera calibrated
CN105929958B (en) A kind of gesture identification method, device and wear-type visual device
CN110211222B (en) AR immersion type tour guide method and device, storage medium and terminal equipment
CN112198959A (en) Virtual reality interaction method, device and system
CN108416832B (en) Media information display method, device and storage medium
CN110717391A (en) Height measuring method, system, device and medium based on video image
JP7078577B2 (en) Operational similarity evaluation device, method and program
JP2005198818A (en) Learning supporting system for bodily motion and learning supporting method
CN109545003A (en) A kind of display methods, device, terminal device and storage medium
CN111539299B (en) Human motion capturing method, device, medium and equipment based on rigid body
KR20200098970A (en) Smart -learning device and method based on motion recognition
CN104933278B (en) A kind of multi-modal interaction method and system for disfluency rehabilitation training
CN109426336A (en) A kind of virtual reality auxiliary type selecting equipment
CN112288876A (en) Long-distance AR identification server and system
CN111881807A (en) VR conference control system and method based on face modeling and expression tracking
CN114832349B (en) Yuanzhou swimming teaching auxiliary system and use method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination