CN106125909A - A kind of motion capture system for training - Google Patents
- Publication number
- CN106125909A (Application No. CN201610423116.5A)
- Authority
- CN
- China
- Prior art keywords
- training
- motion capture
- real
- order
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T13/00—Animation
- G06T13/20—3D [Three Dimensional] animation
- G06T13/40—3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Rehabilitation Tools (AREA)
Abstract
The invention discloses a motion capture system for training. The motion capture equipment is easy to wear and collects data with high precision, avoiding the process of attaching motion capture sensors one by one; no camera equipment is needed to record images, so the requirements on the external environment are low. A control unit models the data collected by the motion capture equipment to generate 3D images in real time with high efficiency. A training unit accurately matches the trainee's action data in real time and feeds back error prompts, improving the teaching effectiveness of remote teaching and wireless teaching. Through the display unit, the trainee can see at a glance how accurate the training actions are, which effectively improves the trainee's action accuracy.
Description
Technical field
The present invention relates to the field of attitude detection, and in particular to a motion capture system for training.
Background technology
Current action-training teaching is delivered either in person or through 2D materials such as videos and images. The former imposes demanding requirements on the free time and location of both parties; the latter overcomes the constraints of time and place, but the trainee's practice cannot be fed back in time and the coach's guidance cannot be obtained in real time, and 2D materials suffer from an inherent shortcoming: actions are distorted when reproduced.
Existing motion capture is mainly applied in spaceflight, aviation and 3D film production; for example, unmanned-aerial-vehicle navigation systems employ motion capture technology. In current 3D film production, photoelectric sensors are commonly used to collect the motion trajectory of the subject, with cameras at multiple different angles picking up the reflective markers of the photoelectric sensors. The requirements on the environment are high, and the drawbacks include susceptibility to backlight during shooting and high cost; because the image computation places high demands on the hardware, there is also considerable latency at present, so the range of application is narrow and hard to popularize. Traditional motion capture uses many scattered sensors: each motion capture sensor must be stuck onto the subject's body one by one, each sensor has a fixed corresponding position, and sticking a sensor in the wrong position directly corrupts the collected data, making operation cumbersome. Moreover, current motion capture sensors commonly collect three-axis or six-axis data, so the precision is low. It can thus be seen that motion capture has not yet been applied to action-training teaching.
Summary of the invention
In view of the above problems in existing action-training teaching, a motion capture system for training is now provided that can collect the action data of training personnel in real time and match the data accurately, thereby ensuring that the trainee improves action accuracy.
The specific technical scheme is as follows:
A motion capture system for training, comprising:
a motion capture equipment, configured to collect the action data of a trainee wearing the motion capture equipment, the motion capture equipment comprising a plurality of collecting units respectively arranged at a plurality of preset positions of the motion capture equipment, each collecting unit corresponding to a unique position identifier, the collecting units being configured to collect in real time the action data of the corresponding preset positions;
a control unit, connected with the motion capture equipment, configured to build a training 3D posture of the human body from the action data according to a preset human motion model, and to generate a training real-time image according to the training 3D posture;
a storage unit, configured to store a standard real-time image;
a training unit, connected to the control unit and providing a group of standard data, each standard data being matched with a unique training identifier, the training identifiers corresponding one-to-one with the position identifiers, the training unit being configured to match, in real time and according to a preset rule, the standard data against the action data of the position identifier corresponding to the respective training identifier, and to generate a corresponding real-time training result, the real-time training result representing the comparison between the trainee's training real-time image and the standard real-time image;
a display unit, connected respectively to the control unit, the storage unit and the training unit, and providing a first display module and a second display module, the first display module being configured to display the standard real-time image, the second display module being configured to display the training real-time image, the display unit being further configured to display the real-time training result.
Preferably, the motion capture equipment further comprises:
an output unit, configured to output the action data;
a processing unit, connected respectively to the output unit and the plurality of collecting units, configured to receive the plurality of action data sent by the plurality of collecting units and to send the plurality of action data to the output unit.
Preferably, the motion capture equipment is wirelessly connected with the control unit.
Preferably, the human motion model is a multi-rigid-body model.
Preferably, the motion capture equipment matches the multi-rigid-body model: the number of rigid bodies corresponds to the number of collecting units, and the rigid bodies correspond one-to-one with the collecting units.
Preferably, the center of each rigid body is located at a preset position of the motion capture equipment, and the collecting unit is configured to collect the action data of the center position of the rigid body.
Preferably, a carrier coordinate system is formed in each collecting unit, the center point of the carrier coordinate system being the center of the collecting unit;
the collecting unit comprises:
a three-axis accelerometer, configured to collect in real time the three-axis rotational acceleration, under the carrier coordinate system, of the rigid body corresponding to the collecting unit;
a three-axis gyroscope, configured to collect in real time the three-axis angular velocity, under the carrier coordinate system, of the rigid body corresponding to the collecting unit;
a three-axis magnetometer, configured to collect in real time the three-axis magnetic components, under the carrier coordinate system, of the rigid body corresponding to the collecting unit;
a control module, connected respectively to the three-axis accelerometer, the three-axis gyroscope and the three-axis magnetometer, configured to generate a quaternion under the world coordinate system according to the three-axis acceleration, the three-axis angular velocity and the three-axis magnetic components, the action data comprising the quaternion and the position identifier corresponding to the collecting unit.
Preferably, the control unit comprises:
a modeling module, configured to build the 3D posture of the human body according to the multi-rigid-body model, from the quaternion of each collecting unit and the corresponding position identifier;
a synthesis module, connected to the modeling module, configured to synthesize the real-time 3D posture into the training real-time image.
Preferably, the preset rule is to judge whether the deviation between the standard data and the action data of the position identifier corresponding to the respective training identifier is within a preset threshold range: if the deviation is within the threshold range, the matching succeeds; if the deviation is not within the threshold range, the matching fails.
Preferably, the standard data are first Euler angles;
the training unit converts the action data of the position identifier corresponding to the respective training identifier into second Euler angles, and generates the corresponding real-time training result according to the deviation between the first Euler angles and the second Euler angles.
The beneficial effects of the above technical scheme:
In this technical scheme, the motion capture equipment is easy to wear and collects data with high precision, avoiding the process of attaching motion capture sensors one by one; no camera equipment is needed to record images, so the requirements on the external environment are low. The control unit models the data collected by the motion capture equipment to generate 3D images in real time with high efficiency. The training unit accurately matches the trainee's action data in real time and feeds back error prompts, improving the teaching effectiveness of remote teaching and wireless teaching. Through the display unit, the trainee can see at a glance how accurate the training actions are, which effectively improves the trainee's action accuracy.
Brief description of the drawings
Fig. 1 is a block diagram of an embodiment of the motion capture system for training of the present invention;
Fig. 2 is an internal block diagram of an embodiment of the collecting unit of the present invention;
Fig. 3 is an internal block diagram of an embodiment of the control unit of the present invention.
Detailed description of the invention
The technical scheme in the embodiments of the present invention will be described clearly and completely below in conjunction with the accompanying drawings in the embodiments of the present invention. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by a person of ordinary skill in the art without creative work fall within the protection scope of the present invention.
It should be noted that, where no conflict arises, the embodiments of the present invention and the features in the embodiments may be combined with each other.
The invention is further described below in conjunction with the accompanying drawings and specific embodiments, which are not intended to limit the invention.
As shown in Fig. 1, a motion capture system for training comprises:
a motion capture equipment 8, configured to collect the action data of the trainee wearing the motion capture equipment 8; the motion capture equipment 8 comprises a plurality of collecting units 1 respectively arranged at a plurality of preset positions of the motion capture equipment 8, each collecting unit 1 corresponding to a unique position identifier, the collecting units 1 being configured to collect in real time the action data of the corresponding preset positions;
a control unit 5, connected with the motion capture equipment 8, configured to build the training 3D posture of the human body from the action data according to the preset human motion model, and to generate the training real-time image according to the training 3D posture;
a storage unit 6, configured to store the standard real-time image;
a training unit 4, connected to the control unit 5 and providing a group of standard data, each standard data being matched with a unique training identifier, the training identifiers corresponding one-to-one with the position identifiers; the training unit 4 is configured to match, in real time and according to the preset rule, the standard data against the action data of the position identifier corresponding to the respective training identifier, and to generate the corresponding real-time training result, which represents the comparison between the trainee's training real-time image and the standard real-time image;
a display unit 7, connected respectively to the control unit 5, the storage unit 6 and the training unit 4, providing a first display module for displaying the standard real-time image and a second display module for displaying the training real-time image; the display unit 7 also displays the real-time training result.
Further, the standard data represent the standard actions and the action data represent the user's actions. The standard data can be obtained by having the coach wear the motion capture equipment in advance and record the standard actions. The real-time training result represents the comparison between the trainee's action and the standard, and may take the form of an error prompt for the trainee's incorrect action.
In this embodiment, the motion capture equipment 8 is easy to wear and collects data with high precision, avoiding the process of attaching motion capture sensors one by one; no camera equipment is needed to record images, so the requirements on the external environment are low. The control unit 5 models the data collected by the motion capture equipment 8 to generate 3D images in real time with high efficiency. The training unit 4 accurately matches the trainee's action data in real time and feeds back error prompts, improving the teaching effectiveness of remote teaching and wireless teaching. Through the display unit 7, the trainee can see at a glance how accurate the training actions are, which effectively improves the trainee's action accuracy.
In a preferred embodiment, the motion capture equipment 8 further comprises:
an output unit 3, configured to output the action data;
a processing unit 2, connected respectively to the output unit 3 and the plurality of collecting units 1, configured to receive the plurality of action data sent by the plurality of collecting units 1 and to send the plurality of action data to the output unit 3.
Further, the motion capture equipment 8 is wirelessly connected with the control unit 5.
The output unit 3 uses a wireless module, and the control unit 5 exchanges the action data with the output unit 3 via wireless communication.
Further, the wireless module may be a 2.4G module operating in the 2.400 GHz to 2.4835 GHz band; 2.4G modules have the advantages of low cost, high efficiency, low voltage and small size.
In this embodiment, the processing unit 2 and the output unit 3 are both arranged on the motion capture equipment 8, so that the data are sent wirelessly to the control unit 5.
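Purely as an illustration, one plausible wire format for a single action-data sample sent over the 2.4G link is a position identifier followed by the four quaternion components; the patent does not specify any packet layout, so the format below is a hypothetical sketch:

```python
import struct

# Hypothetical layout (not specified by the patent): 1-byte position
# identifier + quaternion w, x, y, z as little-endian 32-bit floats.
SAMPLE_FMT = "<B4f"  # 17 bytes per action-data sample

def pack_sample(position_id, quaternion):
    """Serialize one collecting unit's sample for wireless transmission."""
    return struct.pack(SAMPLE_FMT, position_id, *quaternion)

def unpack_sample(payload):
    """Deserialize one sample on the control-unit side."""
    position_id, w, x, y, z = struct.unpack(SAMPLE_FMT, payload)
    return position_id, (w, x, y, z)
```

A 16-unit suit would then send 16 such samples per frame, which the processing unit can concatenate into one payload before handing it to the output unit.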
In a preferred embodiment, the human motion model is a multi-rigid-body model comprising multiple rigid bodies. Taking a multi-rigid-body model with 16 rigid bodies as an example, the 16 rigid bodies are: head, upper torso, lower torso, pelvis, left upper arm, left forearm, left hand, left thigh, left lower leg, left foot, right upper arm, right forearm, right hand, right thigh, right lower leg and right foot.
In this embodiment, the multi-rigid-body model treats each segment of the human limbs as a rigid body, i.e. an object whose internal points undergo no relative deformation. A rigid body is an object whose shape, size and relative internal positions remain unchanged during motion or under applied force. A perfectly rigid body does not actually exist; it is an idealized model, because any object deforms more or less under force. If the degree of deformation is negligibly small relative to the object's own geometric dimensions, the deformation can be ignored when studying the object's motion.
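Since the rigid bodies correspond one-to-one with the collecting units, the 16-segment example above could be held as a lookup table from position identifiers to rigid-body names. The numbering below is hypothetical; the patent only requires that each identifier be unique:

```python
# Hypothetical position identifiers for the 16-rigid-body example;
# the patent fixes no particular numbering scheme.
RIGID_BODY_BY_POSITION_ID = {
    1: "head",            2: "upper torso",     3: "lower torso",
    4: "pelvis",          5: "left upper arm",  6: "left forearm",
    7: "left hand",       8: "left thigh",      9: "left lower leg",
    10: "left foot",      11: "right upper arm", 12: "right forearm",
    13: "right hand",     14: "right thigh",    15: "right lower leg",
    16: "right foot",
}
```

With such a table, an incoming sample's position identifier immediately names the rigid body whose posture it describes.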
In a preferred embodiment, the motion capture equipment 8 matches the multi-rigid-body model: the number of rigid bodies corresponds to the number of collecting units 1, and the rigid bodies correspond one-to-one with the collecting units 1.
Further, the center of each rigid body is located at a preset position of the motion capture equipment 8, and the collecting unit 1 is configured to collect the action data of the center position of the rigid body.
In this embodiment, the collecting unit 1 detects the action data of the corresponding rigid body in real time and sends the action data in real time to the processing unit 2 arranged on the motion capture equipment 8; the processing unit 2 collects the data gathered by all collecting units 1 and sends the data to the control unit 5 through the output unit 3. The control unit 5 may be a mobile terminal, such as an Android client. According to the structure of the multi-rigid-body model, the connecting nodes of the rigid bodies can be divided into child nodes, parent nodes and a root node. Taking the case where the motion capture equipment 8 is a suit as an example, the root node is located at the waist, and the absolute position information of a child node is determined by the rotation of its parent node.
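The parent-child relationship described above can be sketched as a small forward-kinematics routine: a child's world pose follows from its parent's orientation. The skeleton, bone offsets and quaternion convention (w, x, y, z) below are illustrative assumptions, not details given by the patent:

```python
def qmul(a, b):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def qrot(q, v):
    """Rotate vector v by unit quaternion q: q * (0, v) * conj(q)."""
    w, x, y, z = q
    return qmul(qmul(q, (0.0,) + tuple(v)), (w, -x, -y, -z))[1:]

# Hypothetical partial skeleton: node -> (parent node, bone offset in
# the parent's frame, meters). The root node sits at the waist.
SKELETON = {
    "waist":       (None,          (0.0, 0.0,  0.0)),
    "upper_torso": ("waist",       (0.0, 0.3,  0.0)),
    "head":        ("upper_torso", (0.0, 0.25, 0.0)),
}

def world_pose(local_rot, node):
    """World orientation and position of a node: the child's absolute
    pose is determined by the rotation of its parent node."""
    parent, offset = SKELETON[node]
    if parent is None:
        return local_rot[node], (0.0, 0.0, 0.0)
    p_rot, p_pos = world_pose(local_rot, parent)
    pos = tuple(p + d for p, d in zip(p_pos, qrot(p_rot, offset)))
    return qmul(p_rot, local_rot[node]), pos
```

With identity rotations everywhere, the head lands directly above the waist; bending the upper torso would move the head even if the head's own local rotation stays fixed, which is exactly the parent-determines-child behavior the description states.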
As shown in Fig. 2, in a preferred embodiment, a carrier coordinate system is formed in each collecting unit 1, the origin of the carrier coordinate system being the center point of the collecting unit 1;
the collecting unit 1 comprises:
a three-axis accelerometer 11, configured to collect in real time the three-axis rotational acceleration, under the carrier coordinate system, of the rigid body corresponding to the collecting unit 1;
a three-axis gyroscope 12, configured to collect in real time the three-axis angular velocity, under the carrier coordinate system, of the rigid body corresponding to the collecting unit 1;
a three-axis magnetometer 13, configured to collect in real time the three-axis magnetic components, under the carrier coordinate system, of the rigid body corresponding to the collecting unit 1;
a control module 14, connected respectively to the three-axis accelerometer 11, the three-axis gyroscope 12 and the three-axis magnetometer 13, configured to generate the quaternion under the world coordinate system from the three-axis acceleration, the three-axis angular velocity and the three-axis magnetic components; the action data comprise the quaternion and the position identifier corresponding to the collecting unit 1.
Further, the carrier coordinate system is the coordinate system of the collecting unit 1 itself.
In this embodiment, the collecting unit 1 gathers nine-axis data through the three-axis accelerometer 11, the three-axis gyroscope 12 and the three-axis magnetometer 13, which improves the precision of the collected actions. Each collecting unit 1 is equipped with its own control module 14, which converts the nine-axis data gathered by the three sensors into a quaternion; this reduces the computing load on the control unit 5 and improves the speed at which the control unit 5 generates the real-time image.
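The patent does not say which fusion algorithm the control module uses. As a minimal sketch of how nine-axis readings can yield a world-frame quaternion, the function below estimates orientation from the accelerometer (gravity gives roll and pitch) and the magnetometer (tilt-compensated heading gives yaw), then converts to a quaternion; a real control module would also fuse the gyroscope, e.g. with a complementary or Kalman-style filter:

```python
import math

def nine_axis_to_quaternion(accel, mag):
    """Accelerometer + magnetometer orientation estimate (gyroscope
    omitted in this sketch). Returns a world-frame unit quaternion
    (w, x, y, z) for the rigid body carrying the sensor."""
    ax, ay, az = accel
    # Roll and pitch from the gravity direction seen by the accelerometer.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    # Tilt-compensated heading from the magnetometer components.
    mx, my, mz = mag
    mxh = mx * math.cos(pitch) + mz * math.sin(pitch)
    myh = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-myh, mxh)
    # Z-Y-X Euler angles -> quaternion.
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cy*cp*cr + sy*sp*sr,
            cy*cp*sr - sy*sp*cr,
            cy*sp*cr + sy*cp*sr,
            sy*cp*cr - cy*sp*sr)
```

For a level sensor pointing along the magnetic field, the estimate reduces to the identity quaternion, and the output is always unit length, as required for the modeling step.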
As shown in Fig. 3, in a preferred embodiment, the control unit 5 comprises:
a modeling module 51, configured to build the 3D posture of the human body according to the multi-rigid-body model, from the quaternion of each collecting unit 1 and the corresponding position identifier;
a synthesis module 52, connected to the modeling module 51, configured to synthesize the real-time 3D posture into the training real-time image.
In this embodiment, since the control unit 5 models the action data under the world coordinate system, the modeling module 51 builds the 3D posture of the human body according to the multi-rigid-body model from the position identifier and corresponding quaternion of each collecting unit 1, and the synthesis module 52 synthesizes the generated 3D posture into the real-time image.
In a preferred embodiment, the preset rule is to judge whether the deviation between the standard data and the action data of the position identifier corresponding to the respective training identifier is within a preset threshold range: if the deviation is within the threshold range, the matching succeeds; if not, the matching fails. Further, there may be multiple thresholds representing different degrees of difficulty, and the corresponding threshold can be selected according to the user's needs.
In this embodiment, when the deviation between the action data and the corresponding standard data exceeds the threshold, the trainee's action is inaccurate; the motion capture system can prompt the trainee that the action is off-standard by voice prompt, by text prompt, or by coloring the corresponding limbs of the off-standard action and the standard action differently on the display unit 7, so that the trainee can correct it in time.
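The preset rule and its multi-level thresholds can be sketched as follows. The threshold values and difficulty names are hypothetical; the patent only requires selectable threshold ranges:

```python
# Hypothetical per-difficulty thresholds (degrees); stricter ranges
# represent higher degree-of-difficulty coefficients.
THRESHOLDS_DEG = {"easy": 20.0, "normal": 10.0, "hard": 5.0}

def match_limb(standard_euler, action_euler, difficulty="normal"):
    """Preset rule: the match succeeds iff the deviation on every axis
    lies within the threshold range chosen for the difficulty level."""
    limit = THRESHOLDS_DEG[difficulty]
    deviations = [abs(s - a) for s, a in zip(standard_euler, action_euler)]
    return max(deviations) <= limit

def feedback(results):
    """results: position identifier -> match flag. Returns prompt strings
    for the limbs that failed to match, e.g. to be shown in a warning
    color on the display unit."""
    return [f"action at position {pid} is off-standard"
            for pid, ok in results.items() if not ok]
```

Running `match_limb` per rigid body each frame yields the real-time training result, and `feedback` produces the error prompts the display unit shows.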
In a preferred embodiment, the standard data are the first Euler angles;
the training unit 4 converts the action data of the position identifier corresponding to the respective training identifier into the second Euler angles, and generates the corresponding real-time training result according to the deviation between the first Euler angles and the second Euler angles.
In this embodiment, the first Euler angles and the second Euler angles are both under the world coordinate system. Euler angles can express the rotation information of the three axes, which simplifies the comparison and improves the processing speed of the motion capture system for training.
In a preferred embodiment, the motion capture equipment 8 comprises a headgear, a jacket, trousers, gloves and shoes.
Further, the motion capture equipment 8 may comprise a cap, a one-piece suit, shoes and gloves.
The motion capture system can record the coach's standard action data in advance through the motion capture equipment 8 as the standard real-time image, or transmit the coach's standard actions to an online server in real time; the trainee then puts on the motion capture equipment 8 and compares his or her actions with the coach's in real time. The principle is to compare, in real time, whether the attitude information of each limb segment agrees within a certain range for each action of the coach and the trainee, and to display the result through different types of prompts on the display unit 7. Since in practice it is extremely difficult for the attitude information of each limb segment (the trainee's action data and the standard data) to agree exactly, thresholds are used to set interval ranges of different difficulty during the comparison of the attitude information. The attitude information consists of the rotation information about the three axes X, Y and Z, so the deviation range of the Euler-angle rotation information on the three axes delimits the range within which the comparison succeeds.
Although 3D motion capture technology already exists, it lacks a process of action comparison. In particular, in yoga and tai chi, the coach's recorded images usually serve as action-learning templates, but there is no way to use the student's actions as feedback and compare them with the coach's. The present invention can compare in real time the differences between the student's and the coach's actions and feed back action errors, which can improve the student's training efficiency.
The above are only preferred embodiments of the present invention and do not thereby limit its implementations or protection scope. Those skilled in the art should appreciate that all equivalent substitutions and obvious changes made using the description and drawings of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. A motion capture system for training, characterized by comprising:
a motion capture equipment, configured to collect the action data of a trainee wearing the motion capture equipment, the motion capture equipment comprising a plurality of collecting units respectively arranged at a plurality of preset positions of the motion capture equipment, each collecting unit corresponding to a unique position identifier, the collecting units being configured to collect in real time the action data of the corresponding preset positions;
a control unit, connected with the motion capture equipment, configured to build a training 3D posture of the human body from the action data according to a preset human motion model, and to generate a training real-time image according to the training 3D posture;
a storage unit, configured to store a standard real-time image;
a training unit, connected to the control unit and providing a group of standard data, each standard data being matched with a unique training identifier, the training identifiers corresponding one-to-one with the position identifiers, the training unit being configured to match, in real time and according to a preset rule, the standard data against the action data of the position identifier corresponding to the respective training identifier, and to generate a corresponding real-time training result, the real-time training result representing the comparison between the trainee's training real-time image and the standard real-time image;
a display unit, connected respectively to the control unit, the storage unit and the training unit, and providing a first display module and a second display module, the first display module being configured to display the standard real-time image, the second display module being configured to display the training real-time image, the display unit being further configured to display the real-time training result.
2. The motion capture system for training as claimed in claim 1, characterized in that the motion capture equipment further comprises:
an output unit, configured to output the action data;
a processing unit, connected respectively to the output unit and the plurality of collecting units, configured to receive the plurality of action data sent by the plurality of collecting units and to send the plurality of action data to the output unit.
3. The motion capture system for training as claimed in claim 1, characterized in that the motion capture equipment is wirelessly connected with the control unit.
4. The motion capture system for training as claimed in claim 1, characterized in that the human motion model is a multi-rigid-body model.
5. The motion capture system for training as claimed in claim 4, characterized in that the motion capture equipment matches the multi-rigid-body model, the number of rigid bodies corresponding to the number of collecting units, and the rigid bodies corresponding one-to-one with the collecting units.
6. The motion capture system for training as claimed in claim 5, characterized in that the center of each rigid body is located at a preset position of the motion capture equipment, and the collecting unit is configured to collect the action data of the center position of the rigid body.
7. The motion capture system for training as claimed in claim 6, characterised in that a carrier coordinate system is formed in each collecting unit, the central point of the carrier coordinate system being the centre of the collecting unit;
the collecting unit comprises:
a three-axis accelerometer, for collecting in real time, in the carrier coordinate system, the three-axis rotational acceleration of the rigid body corresponding to the collecting unit;
a three-axis gyroscope, for collecting in real time, in the carrier coordinate system, the three-axis angular velocity of rotation of the rigid body corresponding to the collecting unit;
a three-axis magnetometer, for collecting in real time, in the carrier coordinate system, the three-axis magnetic components of the rigid body corresponding to the collecting unit;
a control module, connected respectively to the three-axis accelerometer, the three-axis gyroscope and the three-axis magnetometer, for generating a quaternion in the world coordinate system from the three-axis acceleration, the three-axis angular velocity and the three-axis magnetic components; the action data comprises the quaternion and the position identifier corresponding to the collecting unit.
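Claim 7's control module fuses accelerometer, gyroscope and magnetometer readings into a world-frame quaternion, but the patent does not name the fusion algorithm. One conventional approach is a tilt-compensated compass: gravity fixes roll and pitch, the magnetic field fixes yaw, and the resulting angles are packed into a quaternion. The sketch below covers only the static case and uses illustrative function names; a real tracker would additionally integrate the gyroscope rates, for example with a complementary or Kalman filter.

```python
import math

def attitude_quaternion(accel, mag):
    """Estimate a world-frame orientation quaternion (w, x, y, z) from a
    static accelerometer reading (gravity) and a magnetometer reading,
    both in the sensor's carrier coordinate system."""
    ax, ay, az = accel
    # Roll and pitch from the gravity vector measured in the carrier frame.
    roll = math.atan2(ay, az)
    pitch = math.atan2(-ax, math.hypot(ay, az))
    mx, my, mz = mag
    # Tilt-compensate the magnetic field to recover heading (yaw).
    mx2 = mx * math.cos(pitch) + mz * math.sin(pitch)
    my2 = (mx * math.sin(roll) * math.sin(pitch) + my * math.cos(roll)
           - mz * math.sin(roll) * math.cos(pitch))
    yaw = math.atan2(-my2, mx2)
    # Z-Y-X Euler angles to quaternion.
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (cy * cp * cr + sy * sp * sr,
            cy * cp * sr - sy * sp * cr,
            cy * sp * cr + sy * cp * sr,
            sy * cp * cr - cy * sp * sr)
```

A level, north-aligned sensor yields the identity quaternion, which is the expected output for an unrotated rigid body.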
8. The motion capture system for training as claimed in claim 7, characterised in that the control unit comprises:
a modeling module, for building the 3D pose of the human body on the multi-rigid-body model from the quaternion and the corresponding position identifier of each collecting unit;
a synthesis module, connected to the modeling module, for synthesizing the real-time 3D pose into the real-time training image.
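Claim 8's modeling module turns the per-rigid-body quaternions into a 3D body pose. A minimal forward-kinematics sketch under a multi-rigid-body model follows; the skeleton layout, bone offsets and function names are illustrative assumptions, not taken from the patent.

```python
def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    qv = (x, y, z)
    cross = lambda a, b: (a[1] * b[2] - a[2] * b[1],
                          a[2] * b[0] - a[0] * b[2],
                          a[0] * b[1] - a[1] * b[0])
    t = tuple(2 * c for c in cross(qv, v))   # t = 2 (qv x v)
    u = cross(qv, t)                         # qv x t
    return tuple(v[i] + w * t[i] + u[i] for i in range(3))

def forward_kinematics(skeleton, quats):
    """skeleton: one (parent_index, bone_offset) pair per rigid body,
    with parent_index -1 for the root; quats: the world-frame quaternion
    reported by each collecting unit. Returns each segment's end position,
    accumulated from its parent along the chain."""
    positions = []
    for (parent, offset), q in zip(skeleton, quats):
        base = positions[parent] if parent >= 0 else (0.0, 0.0, 0.0)
        rotated = quat_rotate(q, offset)
        positions.append(tuple(base[i] + rotated[i] for i in range(3)))
    return positions
```

With identity quaternions a two-segment chain simply stacks its bone offsets, which makes the hierarchy easy to verify before feeding in live sensor data.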
9. The motion capture system for training as claimed in claim 1, characterised in that the preset rule is to judge whether the deviation between the standard data and the action data of the position identifier corresponding to the training identifier lies within a preset threshold range; if the deviation is within the threshold range, the match succeeds; if the deviation is outside the threshold range, the match fails.
10. The motion capture system for training as claimed in claim 9, characterised in that the standard data is a first Euler angle;
the training unit converts the action data of the position identifier corresponding to the training identifier into a second Euler angle, and generates the corresponding real-time training result from the deviation between the first Euler angle and the second Euler angle.
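Claim 10 compares a stored first Euler angle against a second Euler angle converted from the collected quaternion. A minimal conversion sketch follows; the Z-Y-X (yaw-pitch-roll) convention is assumed, as the patent does not name one.

```python
import math

def quat_to_euler(q):
    """Convert a world-frame unit quaternion (w, x, y, z) to Z-Y-X Euler
    angles (roll, pitch, yaw) in radians."""
    w, x, y, z = q
    roll = math.atan2(2 * (w * x + y * z), 1 - 2 * (x * x + y * y))
    # Clamp guards against numerical drift pushing asin outside [-1, 1].
    pitch = math.asin(max(-1.0, min(1.0, 2 * (w * y - x * z))))
    yaw = math.atan2(2 * (w * z + x * y), 1 - 2 * (y * y + z * z))
    return roll, pitch, yaw
```

The real-time training result of claim 10 would then be derived from the angle-by-angle deviation between the first and second Euler angles.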
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610423116.5A CN106125909A (en) | 2016-06-14 | 2016-06-14 | A kind of motion capture system for training |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610423116.5A CN106125909A (en) | 2016-06-14 | 2016-06-14 | A kind of motion capture system for training |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106125909A (en) | 2016-11-16 |
Family
ID=57270893
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610423116.5A Pending CN106125909A (en) | 2016-06-14 | 2016-06-14 | A kind of motion capture system for training |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106125909A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN107293162A (en) * | 2017-07-31 | 2017-10-24 | 广东欧珀移动通信有限公司 | Motion teaching auxiliary method and device, and terminal device
CN107920203A (en) * | 2017-11-23 | 2018-04-17 | 乐蜜有限公司 | Image acquisition method, device and electronic equipment
WO2019100756A1 (en) * | 2017-11-23 | 2019-05-31 | 乐蜜有限公司 | Image acquisition method and apparatus, and electronic device
CN108230794A (en) * | 2018-01-24 | 2018-06-29 | 广州师道科技有限公司 | Classroom interactive teaching system
CN108339257A (en) * | 2018-03-05 | 2018-07-31 | 安徽传质信息科技有限公司 | A comprehensive body-movement training device
CN108635808A (en) * | 2018-05-14 | 2018-10-12 | 西安医学院 | A yoga instructional device
CN109453498A (en) * | 2018-10-23 | 2019-03-12 | 快快乐动(北京)网络科技有限公司 | A training auxiliary system and method
CN109658777A (en) * | 2018-12-29 | 2019-04-19 | 陕西师范大学 | Method and system for limb-action teaching control
CN112382152A (en) * | 2020-11-26 | 2021-02-19 | 中国人民解放军陆军军医大学第一附属医院 | Intelligent teaching auxiliary system
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title
---|---|---|---|---
CN102243687A (en) * | 2011-04-22 | 2011-11-16 | 安徽寰智信息科技股份有限公司 | Physical education teaching auxiliary system based on motion recognition technology and implementation method thereof
CN102467749A (en) * | 2010-11-10 | 2012-05-23 | 上海日浦信息技术有限公司 | Three-dimensional virtual human body movement generation method based on key frames and spatiotemporal constraints
CN203825855U (en) * | 2014-04-17 | 2014-09-10 | 国网上海市电力公司 | Hot-line work simulation training system based on a three-dimensional Kinect camera
CN104197987A (en) * | 2014-09-01 | 2014-12-10 | 北京诺亦腾科技有限公司 | Combined-type motion capture system
CN104267815A (en) * | 2014-09-25 | 2015-01-07 | 黑龙江节点动画有限公司 | Motion capture system and method based on inertial sensor technology
CN104700433A (en) * | 2015-03-24 | 2015-06-10 | 中国人民解放军国防科学技术大学 | Vision-based real-time general motion capture method and system for the human body
CN204500708U (en) * | 2015-03-10 | 2015-07-29 | 深圳市康宁医院 | A sensory system instrument for training
- 2016-06-14: CN application CN201610423116.5A, published as CN106125909A (en), status Pending
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106125909A (en) | A kind of motion capture system for training | |
US11772266B2 (en) | Systems, devices, articles, and methods for using trained robots | |
CN104699247B (en) | A kind of virtual reality interactive system and method based on machine vision | |
CN104898828B (en) | Using the body feeling interaction method of body feeling interaction system | |
CN107330967B (en) | Rider motion posture capturing and three-dimensional reconstruction system based on inertial sensing technology | |
CN108854034B (en) | Cerebral apoplexy rehabilitation training system based on virtual reality and inertial motion capture | |
CN106125908A (en) | A kind of motion capture calibration system | |
CN101579238B (en) | Human motion capture three dimensional playback system and method thereof | |
CN103759739B (en) | A kind of multimode motion measurement and analytic system | |
CN107754225A (en) | A kind of intelligent body-building coaching system | |
CN105512621A (en) | Kinect-based badminton motion guidance system | |
CN203405772U (en) | Immersion type virtual reality system based on movement capture | |
CN104407701A (en) | Individual-oriented clustering virtual reality interactive system | |
CN104197987A (en) | Combined-type motion capturing system | |
CN104898829A (en) | Somatosensory interaction system | |
CN104147770A (en) | Inertial-sensor-based wearable hemiplegia rehabilitation apparatus and strap-down attitude algorithm | |
CN106166376A (en) | Simplify taijiquan in 24 forms comprehensive training system | |
CN106569591A (en) | Tracking method and system based on computer vision tracking and sensor tracking | |
CN105962879A (en) | Pose control system and control method of capsule endoscope and capsule endoscope | |
CN104898827A (en) | Somatosensory interaction method applying somatosensory interaction system | |
CN109079794A (en) | It is a kind of followed based on human body attitude robot control and teaching method | |
CN103295011A (en) | Information processing apparatus, information processing method and computer program | |
CN109846487A (en) | Thigh measuring method for athletic posture and device based on MIMU/sEMG fusion | |
CN109344922A (en) | A kind of dance movement evaluating system having motion-captured function | |
WO2022227664A1 (en) | Robot posture control method, robot, storage medium and computer program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | Application publication date: 20161116 |