CN109675264A - A kind of general limbs training system and method based on Kinect - Google Patents
A kind of general limbs training system and method based on Kinect
- Publication number
- CN109675264A CN109675264A CN201810913701.2A CN201810913701A CN109675264A CN 109675264 A CN109675264 A CN 109675264A CN 201810913701 A CN201810913701 A CN 201810913701A CN 109675264 A CN109675264 A CN 109675264A
- Authority
- CN
- China
- Prior art keywords
- coach
- virtual portrait
- movement
- training
- trainer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
Classifications
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B24/00—Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
- A63B24/0062—Monitoring athletic performances, e.g. for determining the work of a user on an exercise apparatus, the completed jogging or cycling distance
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
- A63B71/0619—Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
- A63B71/0622—Visual, audio or audio-visual systems for entertaining, instructing or motivating the user
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B71/00—Games or sports accessories not covered in groups A63B1/00 - A63B69/00
- A63B71/06—Indicating or scoring devices for games or players, or for other sports activities
- A63B71/0619—Displays, user interfaces and indicating devices, specially adapted for sport equipment, e.g. display mounted on treadmills
- A63B2071/0647—Visualisation of executed movements
Landscapes
- Engineering & Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Physical Education & Sports Medicine (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a Kinect-based general limb training system and method. The system includes a coach end, a server, a training end and a mobile terminal. The coach end connects to the training end through the server; it records actions, converts the action information into a data text, and uploads it to the server. The training end captures the trainee's movements, downloads the data text from the server, parses it into the corresponding actions, and compares the trainee's movements against them to produce a matching result. The mobile terminal connects to the training end and controls the playback progress of the actions at the training end. Through the coach end, the invention flexibly handles actions from many different fields and of many different types: when a training action the trainee wants to practice is not yet in the system, the trainee only needs to ask a coach to record it; the coach records the action and uploads it to the server for the trainee to download and use. No software developer is needed to do secondary development for additional training actions, which greatly simplifies the expansion process, shortens expansion time, and reduces development cost.
Description
Technical field
The present invention relates to the technical field of medical rehabilitation, and more particularly to a Kinect-based general limb training system and method.
Background art
Since its launch in 2010, Kinect has been popular with developers because it supports importing motion-captured skeleton data together with functions such as image recognition, speech recognition and speech input. Microsoft also provides bundled runtime drivers, a program development interface, convenient installation files and a complete development handbook, so developers can easily build somatosensory systems and natural human-computer interaction applications under the Visual Studio development platform using mainstream high-level programming languages. The Kinect for Windows SDK opens up many possibilities and can be applied to various social concerns in fields such as medical treatment and education.
At present, there are many applications on the market developed with Kinect somatosensory technology. For example, an ergonomics assessment system developed on Kinect can help people complete ergonomic analyses, and Kinect-based virtual assembly technology allows workers to carry out simulated operations of a working situation.
However, a typical training or rehabilitation system can only train one specific action or one fixed series of actions. Such software is overly single-purpose, its scope of application is narrow, and its development is highly repetitive, which makes it difficult to popularize.
Summary of the invention
The purpose of the present invention is to provide a Kinect-based general limb training system and method which, through a newly added coach end, flexibly handles actions from different fields and of different types, and adds new training actions without secondary development, greatly simplifying the expansion process.
To achieve the above object, the technical scheme of the present invention is as follows:
A Kinect-based general limb training system includes a coach end, a server, a training end and a mobile terminal. The coach end connects to the training end through the server; it records actions, converts the action information into a data text, and uploads it to the server. The training end captures the trainee's movements, downloads the data text from the server, parses it into the corresponding actions, and compares the trainee's movements against them to produce a matching result. The mobile terminal connects to the training end and controls the playback progress of the actions at the training end.
In the above scheme, the coach end includes a first Kinect sensor, an action database, an input module and a first display screen. The first Kinect sensor and the input module connect to the action database, and the first display screen connects to the first Kinect sensor. The first Kinect sensor collects the limb actions of a person and binds them to a virtual character model; the action database stores the limb actions recorded by the Kinect sensor and converts them into data texts; the input module records the data file name and action description matching each limb action; and the first display screen displays the virtual character model.
In the above scheme, the training end includes a data analysis module, a second Kinect sensor, an action matching module and a second display screen. The data analysis module connects to the server; the action matching module connects to the data analysis module and the second Kinect sensor; and the second display screen connects to the data analysis module, the action matching module and the second Kinect sensor. The data analysis module parses the data text downloaded from the server into action data and converts it back into actions mapped onto virtual character model A; the second Kinect sensor collects the trainee's action information and maps it onto virtual character model B; the action matching module determines the degree of matching between virtual character model A and virtual character model B and marks the matching result on virtual character model A; and the second display screen switches between displaying virtual character model A and virtual character model B.
In the above scheme, the action matching module determines whether the two virtual models match by comparing the relative positions of their corresponding nodes, calculating the error value by Δ² = ((a − b) − (c − d))², where, for any bone node, θ1 denotes a joint of the trainee and θ2 denotes the node corresponding to θ1 on the coach's model; vector a is the vector from the coordinate origin to bone node θ1; vector c is the vector from the coordinate origin to bone node θ2; b and d denote the root nodes of the trainee's model and the coach's model respectively; and Δ indicates the similarity of the two nodes.
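As a minimal sketch of this per-joint comparison, the vector names a, b, c and d follow the description above; the threshold value is an assumption, since the patent leaves the concrete range unspecified:

```python
import numpy as np

def joint_error(a, b, c, d):
    """Squared error between a trainee joint and the corresponding coach joint.

    a: trainee joint position (origin-relative), b: trainee model root node,
    c: coach joint position (origin-relative),  d: coach model root node.
    Positions are made root-relative before comparison, so the metric is
    insensitive to where each person stands in front of the sensor.
    """
    diff = (np.asarray(a) - np.asarray(b)) - (np.asarray(c) - np.asarray(d))
    return float(diff @ diff)  # implements Δ² = ((a − b) − (c − d))²

def joint_matches(a, b, c, d, threshold=0.05):
    """A joint matches when its error falls within the set threshold.
    The 0.05 (squared metres) value is illustrative, not from the patent."""
    return joint_error(a, b, c, d) <= threshold
```

Subtracting the root node from each joint is what makes the comparison positional rather than absolute: two people standing in different spots can still match pose for pose.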
In the above scheme, the mobile terminal downloads and installs an APP client, and the APP client is provided with a play button, a pause button and an exit button for sending play, pause and exit instructions.
A Kinect-based general limb training method includes the following steps:
S1. the coach records the standard actions of the trained limbs at the coach end, makes them into an action file, and sends it to the server;
S2. the training end downloads the action file, parses it into standard actions and plays them, while collecting the trainee's training actions, compares the degree of matching between the training actions and the standard actions, and outputs the matching result.
In the above scheme, step S1 specifically includes:
S11. the first Kinect sensor is turned on, the coach enters the capture area, and the first Kinect sensor recognizes and tracks the coach in the capture area and binds the captured limb actions of the coach to the virtual character model;
S12. the first display screen displays the virtual character model; when the coach's limb actions are mapped to the virtual character model and the two movements are consistent, the coach sends a start-recording command and a file-save box pops up;
S13. the coach enters a file name and, after saving, starts recording the standard actions of the trained limbs that the coach demonstrates in the capture area;
S14. after the demonstration, the coach sends an end-recording command and saves the recording;
S15. the recorded standard actions are converted into an action file and uploaded to the server.
In the above scheme, step S2 specifically includes:
S21. the second Kinect sensor is turned on, the trainee enters the capture area, and the second Kinect sensor recognizes and tracks the trainee in the capture area and binds the captured limb actions to virtual character model B;
S22. the display screen displays virtual character model B; when the trainee's limb actions are mapped to virtual character model B and the two movements are consistent, the trainee sends a select-action-file command and a file-selection dialog box pops up;
S23. after the file is selected, the trainee sends a start-play instruction, and the second display screen plays the standard actions mapped onto virtual character model A while recording the trainee's training actions;
S24. after sending a pause-play instruction, the trainee enters the action matching link: the key nodes of virtual character model A display matching labels, red indicating that a node failed to match and green indicating a successful match;
S25. the trainee sends a continue instruction and repeats steps S23 and S24, exiting the playing state after the actions are finished.
In the above scheme, binding the captured limb actions of the coach to the virtual character model in step S11 specifically means capturing the human body image with the Kinect sensor, extracting the person's skeletal structure, obtaining the coordinates of each skeleton node, and converting the person's coordinates relative to the Kinect sensor into the virtual character's coordinates relative to the screen, thereby synchronizing the virtual character with the real person's joint points; the Avatar skeletal system provided by Unity then builds a movable skeleton model from the joint points, so that the virtual character moves according to the real person's actions, and by tweening an animation to the different joint positions within each unit of time the animation effect of synchronizing with the real person's movement is completed.
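The sensor-to-screen coordinate conversion mentioned above might be sketched as follows; the screen size and the half-extents of the capture area are illustrative assumptions, since the patent gives no concrete values:

```python
def sensor_to_screen(joint, screen_w=1920, screen_h=1080,
                     x_range=2.0, y_range=1.5):
    """Map a Kinect camera-space joint (metres, sensor at origin,
    x right, y up, z toward the user) to 2-D screen pixels.

    x_range/y_range are the assumed half-extents of the capture area
    that should fill the screen; they are not specified in the patent.
    """
    x, y, z = joint
    # Normalize to [0, 1]: the centre of the capture area maps to screen centre.
    u = (x + x_range) / (2 * x_range)
    v = 1.0 - (y + y_range) / (2 * y_range)  # screen y grows downward
    return (round(u * screen_w), round(v * screen_h))
```

A person standing on the sensor axis thus lands at the middle of the display, which is what lets the coach judge that the on-screen model and the real movement "are consistent" before recording starts.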
In the above scheme, the specific action matching method in step S24 is: judge whether the trainee's posture matches the standard posture by comparing the relative positions of the corresponding nodes of virtual character model A and virtual character model B, calculating the error value with the equation Δ² = ((a − b) − (c − d))², where, for any bone node, θ1 denotes a joint of the trainee and θ2 denotes the node corresponding to θ1 on the virtual character model mapped from the coach; vector a is the vector from the coordinate origin to bone node θ1; vector c is the vector from the coordinate origin to bone node θ2; b and d denote the root nodes of the virtual character model mapped from the trainee and the virtual character model mapped from the coach respectively; and Δ indicates the similarity of the two nodes. When the error value is within the set threshold range, the match is successful.
With the Kinect-based general limb training system and method of the invention, the newly added coach end flexibly handles actions from different fields and of different types. When a training action the trainee wants to practice is not in the system, the trainee only needs to ask a coach to record it; the coach records the action and uploads it to the server for the trainee to download and use. No software developer is needed to do secondary development for other training actions, which greatly simplifies the expansion process, shortens expansion time, and reduces development cost.
Description of the drawings
Fig. 1 is a structural block diagram of the Kinect-based general limb training system in one embodiment of the invention;
Fig. 2 is a flow chart of the Kinect-based general limb training method in one embodiment of the invention.
Specific embodiment
The technical solution of the present invention is described in further detail below with reference to the accompanying drawings and embodiments.
As shown in Fig. 1, a Kinect-based general limb training system includes a coach end, a server, a training end and a mobile terminal. The coach end connects to the training end through the server; it records actions, converts the action information into a data text, and uploads it to the server. The training end captures the trainee's movements, downloads the data text from the server, parses it into the corresponding actions, and compares the trainee's movements against them to produce a matching result. The mobile terminal connects to the training end and controls the playback progress of the actions at the training end.
The coach end includes a first Kinect sensor, an action database, an input module and a first display screen. The first Kinect sensor and the input module connect to the action database, and the first display screen connects to the first Kinect sensor. The first Kinect sensor collects the limb actions of a person and binds them to a virtual character model; the action database stores the limb actions recorded by the Kinect sensor and converts them into data texts; the input module records the data file name and action description matching each limb action; and the first display screen displays the bound virtual character model.
When the coach enters the capture region, the system automatically recognizes the person entering the capture area. Once the system recognizes the coach, it captures and tracks the coach's limbs and maps the limb actions captured by the first Kinect sensor onto the virtual character model. The system interface displays a virtual character model; in the captured state the model's movements remain exactly the same as the coach's, which indicates that the user has been "bound" to the model successfully.
After the user and the model are bound successfully, the coach can send a recording instruction to the system. On receiving the instruction, the system prompts the coach to input the action file name to be saved and, once that is set, opens the recording mode. With recording mode open, the coach demonstrates in the capture area the set of actions the trainee needs to practice, then sends a stop-recording instruction; the system stops recording, saves the just-recorded actions as a file, and attaches the corresponding action description. The coach then screens the recordings and uploads the suitable action files to the server to be saved and shared.
The training end includes a data analysis module, a second Kinect sensor, an action matching module and a second display screen. The data analysis module connects to the server; the action matching module connects to the data analysis module and the second Kinect sensor; and the second display screen connects to the data analysis module, the action matching module and the second Kinect sensor. The data analysis module parses the data text into action data and converts it back into actions mapped onto virtual character model A; the second Kinect sensor collects the trainee's action information and maps it onto virtual character model B; the action matching module determines the degree of matching between virtual character model A and virtual character model B and marks the matching result on virtual character model A; and the second display screen switches between displaying virtual character model A and virtual character model B.
The trainee opens the training end of the system and selects an action file. If the action file is chosen successfully and the file is valid, the trainee sends a start-play instruction through the mobile terminal, and the training end interface plays back the standard actions on virtual character model A. When the trainee sends a pause instruction through the mobile terminal, the training end interface pauses the movement of virtual character model A and opens the capture mode of the second Kinect sensor. After the trainee and virtual character model B are bound successfully, the second Kinect sensor collects the trainee's action information and reflects it on virtual character model B in real time, and the movements of virtual character model A and virtual character model B are matched.
After the system enters the matching state, it obtains in real time the coordinates of each bone node of virtual character model A and virtual character model B relative to its own character model, and calculates the cosine similarity of the relative coordinates of each pair of corresponding bone nodes. If the similarity is within the error range, the node matches successfully; otherwise it fails to match. When all nodes have matched successfully, the action as a whole is matched successfully.
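This root-relative cosine-similarity check could be sketched as follows; the 0.95 threshold is an assumed value, since the patent only says the similarity must fall within an error range:

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two root-relative joint vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def action_matches(model_a, model_b, threshold=0.95):
    """model_a, model_b: {joint_name: (x, y, z)} relative to each model's root.

    Per the description, the whole action matches only when every
    corresponding node pair matches; 0.95 is an illustrative threshold.
    """
    return all(
        cosine_similarity(model_a[j], model_b[j]) >= threshold
        for j in model_a
    )
```

Because the cosine measures direction rather than length, this comparison tolerates body-size differences between the coach and the trainee, which the squared-error metric alone would not.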
A Kinect-based general limb training method, as shown in Fig. 2, includes the following steps:
S1. the coach records the standard actions of the trained limbs at the coach end, makes them into an action file, and sends it to the server, which specifically includes the following steps:
S11. the first Kinect sensor is turned on, the coach enters the capture area, and the first Kinect sensor recognizes and tracks the coach in the capture area and binds the captured limb actions of the coach to the virtual character model;
S12. the first display screen displays the virtual character model; when the coach's limb actions are mapped to the virtual character model and the two movements are consistent, the coach sends a start-recording command and a file-save box pops up;
S13. the coach enters a file name and, after saving, starts recording the standard actions of the trained limbs that the coach demonstrates in the capture area;
S14. after the demonstration, the coach sends an end-recording command and saves the recording;
S15. the recorded standard actions are converted into an action file and uploaded to the server.
In step S11, binding the captured limb actions of the coach to the virtual character model specifically means capturing the human body image with the first Kinect sensor, extracting the person's skeletal structure, obtaining the coordinates of each skeleton node, and converting the person's coordinates relative to the Kinect sensor into the virtual character's coordinates relative to the screen, thereby synchronizing the virtual character with the real person's joint points; the Avatar skeletal system provided by Unity then builds a movable skeleton model from the joint points, so that the virtual character moves according to the real person's actions, and by tweening an animation to the different joint positions within each unit of time the animation effect of synchronizing with the real person's movement is achieved.
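The per-frame "tweening" between joint positions amounts to interpolating each joint from its current on-screen position toward the newly captured one. A minimal sketch follows; the interpolation factor is an assumed value, and inside Unity itself this would typically be done with Vector3.Lerp in C#:

```python
def lerp(p, q, t):
    """Linear interpolation between two 3-D joint positions."""
    return tuple(a + (b - a) * t for a, b in zip(p, q))

def tween_skeleton(current, target, t=0.5):
    """Move every joint of the on-screen skeleton a fraction t of the way
    toward the latest captured pose, smoothing the synchronized animation.

    current/target: {joint_name: (x, y, z)}; t=0.5 is illustrative.
    """
    return {j: lerp(current[j], target[j], t) for j in current}
```

Calling this once per rendered frame hides the gap between the sensor's capture rate and the display's frame rate, which is what produces the smooth synchronized animation the description aims for.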
S2. the training end downloads the action file, parses it into standard actions and plays them, while collecting the trainee's training actions, compares the degree of matching between the training actions and the standard actions, and outputs the matching result, which specifically includes the following steps:
S21. the second Kinect sensor is turned on, the trainee enters the capture area, and the second Kinect sensor recognizes and tracks the trainee in the capture area and binds the captured limb actions to virtual character model B;
S22. the display screen displays virtual character model B; when the trainee's limb actions are mapped to virtual character model B and the two movements are consistent, the trainee sends a select-action-file command and a file-selection dialog box pops up;
S23. after the file is selected, the trainee sends a start-play instruction, and the display screen plays the standard actions mapped onto virtual character model A while recording the trainee's training actions;
S24. after sending a pause-play instruction, the trainee enters the comparison link: the key nodes of virtual character model A display matching labels, red indicating that a node failed to match and green indicating a successful match;
S25. the trainee sends a continue instruction and repeats steps S23 and S24; after the actions are finished, the playing state is exited.
The specific action matching method in step S24 is: judge whether the trainee's posture matches the standard posture by comparing the relative positions of the corresponding nodes of virtual character model A and virtual character model B, calculating the error value with the equation Δ² = ((a − b) − (c − d))², where, for any bone node, θ1 denotes a joint of the trainee and θ2 denotes the joint corresponding to θ1 on the virtual character model mapped from the coach; vector a is the vector from the coordinate origin to bone node θ1; vector c is the vector from the coordinate origin to bone node θ2; b and d denote the root nodes of the trainee's model and the coach's model respectively; and Δ indicates the similarity of the two nodes. When the error value is within the set threshold range, the match is successful.
Considering that the trainee is far from the computer during training but still needs to control the playback state, which would otherwise require a mouse or keyboard, the system includes a mobile terminal such as a mobile phone. The phone connects wirelessly to the training end and is mainly responsible for controlling the playing and pausing of the action playback, serving as a controller for the training end. After the connection succeeds, the buttons in the APP remotely control the playback state of the training end animation:
S3. the trainee opens the APP and inputs the server IP address; after the connection succeeds, the control interface is displayed;
S31. clicking the play button makes the remote end play the action;
S32. clicking the pause button makes the remote end pause the action;
S33. clicking the stop button makes the remote end stop the action;
S34. clicking exit quits the phone APP.
With the Kinect-based general limb training system and method of the invention, the newly added coach end flexibly handles actions from different fields and of different types. When a training action the trainee wants to practice is not in the system, the trainee only needs to ask a coach to record it; the coach records the action and uploads it to the server for the trainee to download and use. No software developer is needed to do secondary development for other training actions, which greatly simplifies the expansion process, shortens expansion time, and reduces development cost.
The specific embodiments described above further describe in detail the purpose, technical scheme and beneficial effects of the present invention. It should be understood that the foregoing is merely a specific embodiment of the invention and is not intended to limit the protection scope of the invention; any modification, equivalent substitution or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the invention.
Claims (10)
1. A Kinect-based general limb training system, characterized in that it includes a coach end, a server, a training end and a mobile terminal, wherein the coach end connects to the training end through the server; the coach end records actions, converts the action information into a data text, and uploads it to the server; the training end captures the trainee's movements, downloads the data text from the server, parses it into the corresponding actions, and compares the trainee's movements against them to produce a matching result; and the mobile terminal connects to the training end and controls the playback progress of the actions at the training end.
2. The Kinect-based general limb training system according to claim 1, characterized in that the coach end includes a first Kinect sensor, an action database, an input module and a first display screen; the first Kinect sensor and the input module connect to the action database, and the first display screen connects to the first Kinect sensor; the first Kinect sensor collects the limb actions of a person and binds them to a virtual character model; the action database stores the limb actions recorded by the Kinect sensor and converts them into data texts; the input module records the data file name and action description matching each limb action; and the first display screen displays the virtual character model.
3. The Kinect-based general limb training system according to claim 1, characterized in that the training end includes a data analysis module, a second Kinect sensor, an action matching module and a second display screen; the data analysis module connects to the server; the action matching module connects to the data analysis module and the second Kinect sensor; the second display screen connects to the data analysis module, the action matching module and the second Kinect sensor; the data analysis module parses the data text downloaded from the server into action data and converts it back into actions mapped onto virtual character model A; the second Kinect sensor collects the trainee's action information and maps it onto virtual character model B; the action matching module determines the degree of matching between virtual character model A and virtual character model B and marks the matching result on virtual character model A; and the second display screen switches between displaying virtual character model A and virtual character model B.
4. The Kinect-based general limb training system according to claim 3, characterized in that the action matching module determines whether the two virtual models match by comparing the relative positions of their corresponding nodes, calculating the error value by Δ² = ((a − b) − (c − d))², where, for any bone node, θ1 denotes a joint of the trainee and θ2 denotes the node corresponding to θ1 on the coach's model; vector a is the vector from the coordinate origin to bone node θ1; vector c is the vector from the coordinate origin to bone node θ2; b and d denote the root nodes of the trainee's model and the coach's model respectively; and Δ indicates the similarity of the two nodes.
5. The Kinect-based general limb training system according to claim 1, characterized in that the mobile terminal downloads and installs an APP client, and the APP client is provided with a play button, a pause button and an exit button for sending play, pause and exit instructions.
6. A Kinect-based general limb training method, characterized by comprising the following steps:
S1. the coach records the standard actions of the trained limbs at the coach end, makes them into an action file, and sends it to the server;
S2. the training end downloads the action file, parses it into standard actions and plays them, while collecting the trainee's training actions, compares the degree of matching between the training actions and the standard actions, and outputs the matching result.
7. The Kinect-based general limb training method according to claim 6, characterized in that step S1 specifically includes:
S11. the first Kinect sensor is turned on, the coach enters the capture area, and the first Kinect sensor recognizes and tracks the coach in the capture area and binds the captured limb actions of the coach to the virtual character model;
S12. the first display screen displays the virtual character model; when the coach's limb actions are mapped to the virtual character model and the two movements are consistent, the coach sends a start-recording command and a file-save box pops up;
S13. the coach enters a file name and, after saving, starts recording the standard actions of the trained limbs that the coach demonstrates in the capture area;
S14. after the demonstration, the coach sends an end-recording command and saves the recording;
S15. the recorded standard actions are converted into an action file and uploaded to the server.
8. The Kinect-based general limb training method according to claim 6, characterized in that step S2 specifically includes:
S21. the second Kinect sensor is turned on, the trainee enters the capture area, and the second Kinect sensor recognizes and tracks the trainee in the capture area and binds the captured limb actions to virtual character model B;
S22. the display screen displays virtual character model B; when the trainee's limb actions are mapped to virtual character model B and the two movements are consistent, the trainee sends a select-action-file command and a file-selection dialog box pops up;
S23. after the file is selected, the trainee sends a start-play instruction, and the second display screen plays the standard actions mapped onto virtual character model A while recording the trainee's training actions;
S24. after sending a pause-play instruction, the trainee enters the action matching link: the key nodes of virtual character model A display matching labels, red indicating that a node failed to match and green indicating a successful match;
S25. the trainee sends a continue instruction and repeats steps S23 and S24, exiting the playing state after the actions are finished.
9. The Kinect-based general limb training method according to claim 7, characterized in that in step S11, binding the coach's captured limb actions to the virtual portrait model specifically means: the Kinect sensor captures the human body image and extracts the person's skeletal structure, the coordinates of each skeleton node are obtained, and the person's coordinates relative to the Kinect sensor are converted into the virtual portrait's coordinates relative to the screen, synchronizing the virtual portrait's joints with the real person's joints; a movable skeleton model is then built from the joint points with the Avatar skeletal system provided by Unity, so that the virtual portrait moves according to the real person's movement, and in-between animation frames are interpolated for the joint positions within each unit time to complete the animation effect of synchronizing with the real person's movement.
10. The Kinect-based general limb training method according to claim 8, characterized in that the action-matching method in step S24 is specifically: whether the trainee's posture matches the standard posture is judged by comparing the relative positions of corresponding nodes of virtual portrait model A and virtual portrait model B, and the error value is calculated with the equation Δ² = ((a - b) - (c - d))², where, for any bone node, θ1 denotes a joint of the trainee and θ2 denotes the node on the coach's mapped virtual portrait model corresponding to θ1; vector a is the vector from the coordinate origin to bone node θ1, and vector c is the vector from the coordinate origin to bone node θ2; b and d denote the root nodes of the trainee's mapped virtual portrait model and the coach's mapped virtual portrait model respectively; Δ indicates the similarity of the two nodes, and when the error value is within the set threshold range the match is successful.
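Interpreted per joint, the formula subtracts each model's root-node vector so the comparison is translation-invariant, then takes the squared difference of the two relative positions. A minimal sketch of this check; the threshold value is an illustrative assumption, as the patent only says it is "set":

```python
def node_error_sq(a, b, c, d):
    """Δ² = ((a - b) - (c - d))², componentwise then summed:
    a, c are vectors from the origin to the trainee's joint θ1 and the
    coach's corresponding joint θ2; b, d are the root nodes of the two
    virtual portrait models. Subtracting the roots removes any overall
    translation between the two models."""
    diff = [(ai - bi) - (ci - di) for ai, bi, ci, di in zip(a, b, c, d)]
    return sum(x * x for x in diff)

def node_matches(a, b, c, d, threshold=0.01):
    # Match succeeds when the error value lies within the set threshold.
    return node_error_sq(a, b, c, d) <= threshold
```

Identical relative positions give an error of zero (always a match), while a joint displaced by 1 m relative to its root fails under the assumed threshold.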
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810913701.2A CN109675264A (en) | 2018-08-13 | 2018-08-13 | A kind of general limbs training system and method based on Kinect |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109675264A (en) | 2019-04-26 |
Family
ID=66184464
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810913701.2A Pending CN109675264A (en) | 2018-08-13 | 2018-08-13 | A kind of general limbs training system and method based on Kinect |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109675264A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112138342A (en) * | 2020-09-28 | 2020-12-29 | 深圳市艾利特医疗科技有限公司 | Balance ability auxiliary training system, method and device based on virtual reality |
CN113747951A (en) * | 2019-10-30 | 2021-12-03 | 路德斯材料有限公司 | Performance assessment apparatus, system and related method |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130204408A1 (en) * | 2012-02-06 | 2013-08-08 | Honeywell International Inc. | System for controlling home automation system using body movements |
CN103706106A (en) * | 2013-12-30 | 2014-04-09 | 南京大学 | Self-adaption continuous motion training method based on Kinect |
CN106097787A (en) * | 2016-08-18 | 2016-11-09 | 四川以太原力科技有限公司 | Limbs teaching method based on virtual reality and teaching system |
CN106178476A (en) * | 2016-08-13 | 2016-12-07 | 泉州医学高等专科学校 | A kind of numeral volleyball training system |
CN106448295A (en) * | 2016-10-20 | 2017-02-22 | 泉州市开拓者智能科技有限公司 | Remote teaching system and method based on capturing |
CN107293175A (en) * | 2017-08-04 | 2017-10-24 | 华中科技大学 | A kind of locomotive hand signal operation training method based on body-sensing technology |
CN107754225A (en) * | 2017-11-01 | 2018-03-06 | 河海大学常州校区 | A kind of intelligent body-building coaching system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107977834B (en) | Data object interaction method and device in virtual reality/augmented reality space environment | |
CN104866101B (en) | The real-time interactive control method and device of virtual objects | |
CN108470485B (en) | Scene-based training method and device, computer equipment and storage medium | |
US20160088286A1 (en) | Method and system for an automatic sensing, analysis, composition and direction of a 3d space, scene, object, and equipment | |
CN105608005B (en) | The test method and system of a kind of television system | |
JPH10134028A (en) | Method and device for remote learning using internet | |
CN109675264A (en) | A kind of general limbs training system and method based on Kinect | |
KR20170129716A (en) | A structure, apparatus and method for providing bi-directional functional training content including provision of adaptive training programs based on performance sensor data | |
CN108335747A (en) | Cognitive training system | |
CN110472099A (en) | Interdynamic video generation method and device, storage medium | |
US10942968B2 (en) | Frameworks, devices and methodologies configured to enable automated categorisation and/or searching of media data based on user performance attributes derived from performance sensor units | |
CN109101879A (en) | A kind of the posture interactive system and implementation method of VR teaching in VR classroom | |
WO2024027661A1 (en) | Digital human driving method and apparatus, device and storage medium | |
WO2016187673A1 (en) | Frameworks, devices and methodologies configured to enable gamification via sensor-based monitoring of physically performed skills, including location-specific gamification | |
CN105405081A (en) | Continued learning providing system through recording learning progress and method thereof | |
CN106598865B (en) | Software testing method and device | |
JP7078577B2 (en) | Operational similarity evaluation device, method and program | |
CN111223549A (en) | Mobile end system and method for disease prevention based on posture correction | |
CN114513694A (en) | Scoring determination method and device, electronic equipment and storage medium | |
JP6999543B2 (en) | Interactive Skills Frameworks and methods configured to enable analysis of physically performed skills, including application to distribution of training content. | |
CN106355437A (en) | Targeted advertising through multi-screen display | |
US20220223067A1 (en) | System and methods for learning and training using cognitive linguistic coding in a virtual reality environment | |
CN109062654A (en) | A kind of interaction type learning method and system | |
CN112218111A (en) | Image display method and device, storage medium and electronic equipment | |
KR20200098970A (en) | Smart -learning device and method based on motion recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190426 |