CN107243141A - Action-assisted training system based on human motion recognition - Google Patents
Action-assisted training system based on human motion recognition
- Publication number
- CN107243141A CN107243141A CN201710313765.4A CN201710313765A CN107243141A CN 107243141 A CN107243141 A CN 107243141A CN 201710313765 A CN201710313765 A CN 201710313765A CN 107243141 A CN107243141 A CN 107243141A
- Authority
- CN
- China
- Prior art keywords
- action
- training
- data
- unit
- joint angles
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- A—HUMAN NECESSITIES
- A63—SPORTS; GAMES; AMUSEMENTS
- A63B—APPARATUS FOR PHYSICAL TRAINING, GYMNASTICS, SWIMMING, CLIMBING, OR FENCING; BALL GAMES; TRAINING EQUIPMENT
- A63B24/00—Electric or electronic controls for exercising apparatus of preceding groups; Controlling or monitoring of exercises, sportive games, training or athletic performances
- A63B24/0003—Analysing the course of a movement or motion sequences during an exercise or trainings sequence, e.g. swing for golf or tennis
- A63B24/0006—Computerised comparison for qualitative assessment of motion sequences or the course of a movement
- A63B2024/0012—Comparing movements or motion sequences with a registered reference
- A63B2220/00—Measuring of physical parameters relating to sporting activity
- A63B2220/05—Image processing for measuring physical parameters
- A63B2220/30—Speed
- A63B2220/34—Angular speed
- A63B2220/40—Acceleration
- A63B2220/80—Special sensors, transducers or devices therefor
- A63B2220/83—Special sensors, transducers or devices therefor characterised by the position of the sensor
- A63B2220/836—Sensors arranged on the body of the user
- A63B2225/00—Miscellaneous features of sport apparatus, devices or equipment
- A63B2225/20—Miscellaneous features of sport apparatus, devices or equipment with means for remote communication, e.g. internet or the like
- A63B2230/00—Measuring physiological parameters of the user
- A63B2230/62—Measuring physiological parameters of the user posture
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/03—Arrangements for converting the position or the displacement of a member into a coded form
- G06F3/033—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor
- G06F3/0346—Pointing devices displaced or positioned by the user, e.g. mice, trackballs, pens or joysticks; Accessories therefor with detection of the device orientation or free movement in a 3D space, e.g. 3D mice, 6-DOF [six degrees of freedom] pointers using gyroscopes, accelerometers or tilt-sensors
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06F18/2411—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on the proximity to a decision surface, e.g. support vector machines
Abstract
The present invention provides an action-assisted training system based on human motion recognition, comprising an action acquisition module and an action-assisted training module. The action acquisition module acquires the limb-posture inertial data of the training actions of the user being trained and sends it to the action-assisted training module. The action-assisted training module gives a 3D reconstruction demonstration based on the limb-posture inertial data, performs human action recognition, and then carries out action evaluation and corrective reinforcement training. The invention applies a human motion recognition method to provide a 3D reconstruction demonstration, action recognition, and precise evaluation of the training action, so that deviations can be corrected during training. This helps the user objectively understand the defects and weaknesses of his or her own actions, fundamentally resolves the various problems encountered in training, and enables more targeted training and action correction to improve the training effect.
Description
Technical field
The present invention relates to the fields of human-computer interaction, machine learning, virtual reality, robot learning, and education and training, and more particularly to an action-assisted training system based on human motion recognition.
Background art
With the development of computer technology and the rise of machine learning, motion recognition technology has grown explosively, and it has great application value in the field of assisted training. On the one hand, it can be used in fields such as sports and dance to analyze, evaluate, and assist the training of professional technical actions. On the other hand, with the continuing development of robotics, it can also serve as a means of effect evaluation and assisted training for robot behavior learning.

At present, the most effective way to teach technical actions is the demonstration-drill method, and this method has proven effective. When trainees are taught this way, it is particularly important to evaluate the actions they perform: the evaluation results show how well each trainee has mastered the training action and which actions tend to deviate, after which targeted demonstration and practice can be repeated.

In traditional training, actions are evaluated only by the coach's naked eye and experience, which cannot yield a comprehensive, detailed, accurate, and systematic evaluation. Whether in sports, dance, or robot behavior learning, in any field with demanding requirements on the movement posture of a person or robot, each movement posture can reach the required precision, standardization, and perfection only through rigorous learning. At the same time, conventional methods depend heavily on the coach, which limits when and where trainees can practice.
Summary of the invention
The present invention provides an action-assisted training system based on human motion recognition that overcomes the above problems or at least partially solves them.

According to one aspect of the present invention, there is provided an action-assisted training system based on human motion recognition, comprising an action acquisition module and an action-assisted training module;

the action acquisition module is configured to acquire the limb-posture inertial data of the user's training action and send it to the action-assisted training module;

the action-assisted training module is configured to give a 3D reconstruction demonstration based on the limb-posture inertial data, then perform human action recognition followed by action evaluation and corrective reinforcement training.

The present invention proposes an action-assisted training system based on human motion recognition: the action acquisition module captures the action in real time and a 3D reconstruction demonstration is given; the action is recognized with a human motion recognition method, and the training action is then evaluated and corrected. By consulting the evaluation of a training action, a trainee of technical actions can also judge whether his or her training method is appropriate, improve it in a targeted way, and raise training quality. Captured training actions help the user objectively understand the errors and weaknesses of his or her own actions and fundamentally resolve the various problems encountered in training, enabling more targeted training and action correction to improve the training effect.
Brief description of the drawings
Fig. 1 is a schematic diagram of an action-assisted training system based on human motion recognition according to an embodiment of the present invention.
Detailed description
The embodiments of the present invention are described in further detail below with reference to the accompanying drawings and examples. The following embodiments are used to illustrate the present invention but do not limit its scope.
As shown in Fig. 1, an action-assisted training system based on human motion recognition comprises an action acquisition module and an action-assisted training module;

the action acquisition module is configured to acquire the limb-posture inertial data of the user's training action and send it to the action-assisted training module;

the action-assisted training module is configured to give a 3D reconstruction demonstration based on the limb-posture inertial data, then perform human action recognition followed by action evaluation and corrective reinforcement training.

In this embodiment, the action acquisition module captures the action in real time and a 3D demonstration is given; the action is recognized with a human motion recognition method, then evaluated, and corrective training follows. By consulting the evaluation of a training action, a trainee can also judge whether his or her training method is appropriate, improve it in a targeted way, and raise training quality. Captured training actions help the user objectively understand the errors and weaknesses of his or her own actions and fundamentally resolve the various problems encountered in training, enabling more targeted training and action correction to improve the training effect.
As an optional embodiment, the action acquisition module includes a collecting unit, a main control unit, and a first Wi-Fi unit;

the collecting unit includes wearable devices worn on multiple body parts of the user, where a single wearable device includes a nine-axis inertial sensor, a first single-chip microcontroller, and a first NRF communication module, and is configured to acquire the limb-posture inertial data of the user's training action and send it to the main control unit;

the main control unit includes a second single-chip microcontroller, a second NRF communication module, and multiple indicator lights corresponding to the wearable devices of the multiple body parts, and is configured to integrate the limb-posture inertial data collected by the wearable devices of the multiple body parts and send it to the first Wi-Fi unit;

the first Wi-Fi unit is configured to transfer the limb-posture inertial data to the action-assisted training module over Wi-Fi.
In this embodiment, real-time motion capture is achieved with wearable devices worn on different parts of the human body, and the data are transferred over Wi-Fi. Motion capture is a human-computer interaction technology that records human action information for analysis and playback. Motion capture methods include mechanical, acoustic, electromagnetic, optical, and inertial capture. The captured data can be as simple as the spatial positions of the limbs or as elaborate as the fine actions of the face and muscle groups. In fields such as sports, dance, and robot behavior learning, movement posture is the core form of expression, so capturing the actions of the user being trained is necessary. The captured information is important to both the trainer and the user.

In this embodiment, the main control unit may further include a red LED indicator light, a switch, and a battery.

In this embodiment, the multiple indicator lights corresponding to the wearable devices of the multiple body parts are blue LED lights, each indicating the wireless connection status of its corresponding wearable device; a lit blue light indicates a normal connection. One red LED indicator light indicates whether the main control unit is working.

In this embodiment, the collecting unit further includes a battery, a switch, and an LED indicator light, which indicates whether the collecting unit is working.
As an optional embodiment, the action-assisted training module includes a standard-training-action demonstration unit, an action-assisted training unit, and a second Wi-Fi unit;

the standard-training-action demonstration unit is configured to drive a 3D cartoon human model with the joint angle sequence information of the standard training action, so as to demonstrate the standard training action;

the action-assisted training unit is configured to resolve the limb-posture inertial data into limb-posture joint angle data and give a 3D model demonstration; to recognize the limb-posture inertial data and compare it with the standard training action data sequence in the cloud database, obtaining an initial evaluation result for the training action; to compute the difference between the limb-posture joint angle data and the joint angle data of the standard training action in the cloud database, obtaining the joint angle deviation information of the training action; and to obtain a final evaluation result from the initial evaluation result and the joint angle deviation information, with which corrective guidance is given;

the second Wi-Fi unit is configured to communicate with the action acquisition module and the cloud database over Wi-Fi.

In this embodiment, the recognized sequence of the current training action is compared with the standard training action sequence in the cloud database to judge which actions are standard and which are not, and this serves as the initial evaluation result of the training action.
In this embodiment, the action-assisted training module provides two training modes: a training-action demonstration mode and an action-assisted training mode.

The training-action demonstration mode is realized by the standard-training-action demonstration unit. In this mode, the joint angle sequence data of the standard training action drive the 3D cartoon human model to demonstrate the standard training action; the trainee can observe the 3D model through a full 360 degrees and watch the standard training action comprehensively and in detail.

The action-assisted training mode is realized by the action-assisted training unit. It includes a segment training mode for a single action segment and a whole-passage training mode for a whole passage of actions. A complete training flow has three parts:

Part I: the training action to be practiced is demonstrated through the standard training action.

Part II: the user being trained imitates and practices the action, and a reconstruction demonstration is given through the 3D model.

Part III: the training action imitated by the user is corrected and reinforced through exercises.

Part I is identical to the standard-training-action demonstration unit. In Part II, the limb-posture inertial data are fused and resolved into the joint angle sequence information of the user's training action, and a 3D model reconstruction demonstration is given. In Part III, the limb-posture inertial data are recognized to obtain the sequence of basic actions the user practiced, which is aligned with the standard training action to judge which of the user's actions are standard and which are not. For each non-standard training action of the user, the joint angle of each limb is then compared with the corresponding limb joint angle of the standard action, yielding precise joint angle deviation information; an evaluation is made on this basis, and the evaluation result guides the corrective and reinforcement exercises.
As an optional embodiment, the action-assisted training system further includes a cloud database, which comprises a standard-training-action database and a user information database;

the standard-training-action database is configured to store the standard action data sequence of each training action and the joint angle data of every joint in that standard action data sequence;

the user information database is configured to obtain from the action-assisted training module, and store, the user's learning state, practice counts, and evaluation results for the different training actions.

In this embodiment, the cloud database supplies the action-assisted training module with standard training action data: it stores the standard action data sequence of each training action and the joint angle data of every joint in the sequence. It also stores the information related to all of the user's action training since registration, including the user's mastery of each training action, learning state, practice counts, and evaluation results, so that the process and effect of the action training can be followed continuously.

The cloud database interacts with the action-assisted training module in both directions: on the one hand, the action-assisted training module needs the standard action data sequences from the cloud database for recognition and precise evaluation; on the other hand, the cloud database needs the data of each training session of the user from the action-assisted training module.

The user information and the standard action data sequences stored in the cloud database are continuously updated.
As an optional embodiment, the standard-training-action demonstration unit further comprises a first 3D model drawing unit and a demonstration unit;

the first 3D model drawing unit is configured to build a 3D human model from the joint angle data of the standard training action data and drive the 3D human model to move in three-dimensional space;

the demonstration unit is configured to use the joint angle data of the standard training action data to drive the motion of the 3D human model in three-dimensional space, so as to demonstrate the standard training action.

In this embodiment, the standard-training-action demonstration unit further comprises a local database, a 3D model drawing unit, and a 3D model demonstration unit; the local database stores the standard training action data, the 3D model drawing unit produces the 3D demonstration from the standard training action data, and the demonstration unit presents it.
As an optional embodiment, the action-assisted training unit further comprises an action preprocessing unit, an action recognition unit, a second 3D model drawing unit, and an evaluation unit;

the action preprocessing unit is configured to filter and denoise, normalize, and segment the limb-posture inertial data, obtaining several action data segments;

the action recognition unit is configured to extract feature vectors from the action data segments and recognize them with a trained binary-tree classifier network, obtaining the initial evaluation result;

the second 3D model drawing unit is configured to compute the joint angle data sequence of each limb from the recognized limb-posture inertial data, build joint angle information frames from the joint angle data sequences, and drive the 3D human model to move in three-dimensional space with the joint angle information frames;

the evaluation unit is configured to compute the difference between the joint angles of each limb and the joint angle data of the standard training action in the cloud database, obtaining the joint angle deviation information of the training action, and to give the final evaluation result from the joint angle deviation information and the initial evaluation result.
In this embodiment, the second 3D model drawing unit realizes the same function as the first 3D model drawing unit; only the input data differ. The first 3D model drawing unit demonstrates from the input standard training action data, whereas the second 3D model drawing unit demonstrates the user's training action as performed, so the joint angle data sequence of each limb must be resolved precisely; joint angle information frames are built from the joint angle data sequences, and the frames drive the 3D human model to move in three-dimensional space.

The action recognition unit and the second 3D model drawing unit realize Part II of the action-assisted training mode, and the evaluation unit realizes Part III.

In this embodiment, after the action preprocessing unit preprocesses the limb-posture inertial data sent by the action acquisition module, the action recognition unit recognizes the user's training action sequence; this sequence is compared with the standard training action sequence in the cloud, yielding the initial evaluation result of the trainee's action. The joint angles of the trainee's training action are then compared with the joint angles of the standard training action in the cloud, yielding the joint angle deviation information of the training action; the initial evaluation result and the joint angle deviation information are finally fused into the final evaluation result, with which the trainee is corrected and guided.
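A minimal sketch of this final-assessment fusion: the joint names, the 10-degree tolerance, and the pass rule below are illustrative assumptions, not values given in the embodiment.

```python
def evaluate(initial_ok, user_angles, standard_angles, tol_deg=10.0):
    """Fuse the classifier's initial result with per-joint angle deviations
    from the cloud reference. Returns (passed, deviating joints).
    tol_deg is an assumed tolerance, not a value from the embodiment."""
    deviations = {j: user_angles[j] - standard_angles[j] for j in standard_angles}
    # Joints whose deviation exceeds the tolerance drive corrective guidance.
    off_joints = {j: d for j, d in deviations.items() if abs(d) > tol_deg}
    passed = initial_ok and not off_joints
    return passed, off_joints

# Hypothetical angles (degrees) for two joints of one action segment.
user = {"left_elbow": 95.0, "right_knee": 162.0}
ref = {"left_elbow": 90.0, "right_knee": 175.0}
passed, off = evaluate(True, user, ref)
print(passed, off)  # the right knee deviates by -13 degrees
```

Reporting the deviating joints, not just a pass/fail flag, is what allows the corrective exercises to target the specific limb.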
In this embodiment, the binary-tree classifier network is obtained by training a BT-SVM-NN binary-tree hybrid-mode classifier network in advance on the standard training actions.

This embodiment uses the pattern classifier network to recognize and classify the user's training action information sequence, obtaining the recognized sequence information of the user's training action, which is compared with the standard training action sequences in the cloud standard-training-action knowledge base. Each single classifier unit in the pattern classifier network consists of an SVM-NN hybrid classifier.

During corrective guidance, a 3D human model driven by the standard training action data and a model driven by the trainee's training action are displayed overlapped, so the trainee can fully and clearly see which training actions deviate and by how much. The trainee then repeatedly drills the deviating training action fragments until the action evaluation result reaches the standard.
Specifically, in the collecting unit, the wearable devices worn on multiple body parts of the user are wearable devices for 17 body parts: the head, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, waist, left thigh, right thigh, left lower leg, right lower leg, left foot, and right foot.

Specifically, the limb-posture inertial data integrated by the main control unit include the x, y, z three-axis acceleration measured by the three-axis accelerometer, the x, y, z three-axis angular velocity measured by the three-axis gyroscope, and the x, y, z three-axis geomagnetic information measured by the three-axis magnetometer.
Specifically, the joint angle information frame includes the neck joint angle, the left and right wrist joint angles, the left and right elbow joint angles, the left and right shoulder joint angles, the waist joint angle, the left and right hip joint angles, the left and right knee joint angles, and the left and right ankle joint angles.
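The joint angle information frame above can be sketched as a simple container; the field names, the timestamp, and the completeness check are illustrative assumptions, not part of the embodiment.

```python
from dataclasses import dataclass, field

# Assumed names for the joints listed above, one angle per joint per frame.
JOINTS = [
    "neck",
    "left_wrist", "right_wrist",
    "left_elbow", "right_elbow",
    "left_shoulder", "right_shoulder",
    "waist",
    "left_hip", "right_hip",
    "left_knee", "right_knee",
    "left_ankle", "right_ankle",
]

@dataclass
class JointAngleFrame:
    """One time step of the joint angle information driving the 3D model."""
    timestamp_ms: int
    angles: dict = field(default_factory=dict)  # joint name -> angle (degrees)

    def complete(self):
        """A frame can drive the model only if every joint angle is present."""
        return all(j in self.angles for j in JOINTS)

frame = JointAngleFrame(0, {j: 0.0 for j in JOINTS})
print(frame.complete(), len(JOINTS))  # True 14
```

A sequence of such frames, one per time step, is what animates the human model in both demonstration paths.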
As an optional embodiment, the action-assisted training module is carried on an Android smart mobile device.

In this embodiment, the action-assisted training module runs on the Android smart platform and uses the OpenGL ES 2.0 API 3D drawing and display interface on that platform to draw and display the cartoon human model. The cartoon human model is built as a tree: the waist is the trunk (root) node of the tree; the left shoulder, right shoulder, left hip, right hip, and chest are the first level of child nodes; the left upper arm, right upper arm, left thigh, right thigh, and neck are the second level of child nodes below the first level; and so on, building the connection relations between the limb joints of the human model, so that the model is conveniently driven with joint angle information frames.
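The tree-structured model above can be sketched as follows. Only the hierarchy and a root-first traversal are shown; the OpenGL ES drawing itself is omitted, the tree is truncated after the second level, and all node names are assumptions.

```python
class Node:
    """One joint of the tree-structured cartoon human model."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def apply_angles(self, angles, depth=0):
        """Walk the tree root-first, the order in which a joint angle
        frame would be applied (each child transform composes on its
        parent's). Returns the visit order for illustration."""
        order = [(self.name, depth)]
        for c in self.children:
            order.extend(c.apply_angles(angles, depth + 1))
        return order

# Waist as root; first- and second-level children as described above.
skeleton = Node("waist", [
    Node("left_shoulder", [Node("left_upper_arm")]),
    Node("right_shoulder", [Node("right_upper_arm")]),
    Node("left_hip", [Node("left_thigh")]),
    Node("right_hip", [Node("right_thigh")]),
    Node("chest", [Node("neck")]),
])

order = skeleton.apply_angles({})
print(order[0], len(order))  # ('waist', 0) 11
```

Because each limb's transform composes on its parent's, setting one joint angle moves the whole subtree below it, which is exactly why a per-joint angle frame suffices to drive the model.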
As an optional embodiment, the action recognition unit further comprises:

a classifier construction and training unit, configured to filter and denoise, normalize, and segment the standard training action data, extract first feature vectors from the time-domain, frequency-domain, and time-frequency-domain data after processing, and train the binary-tree classifier network BT-SVM-NN, in which each node of the binary-tree classifier network is a two-class support vector machine and nearest-neighbor hybrid classifier SVM-NN;

a current recognition unit, configured to extract second feature vectors from the action data segments and feed them to the binary-tree classifier network BT-SVM-NN for recognition, confirming each action as standard or non-standard.

Action recognition in this embodiment is divided into two parts: classifier construction and training, and recognition. The classifier construction and training unit is implemented as follows:

Step 11: filter, denoise, and normalize the standard training actions;

Step 12: segment the processed standard training actions, obtaining several action data segments, where each segment is one basic human action;

Step 13: for each basic human action, extract a first feature vector from its time-domain, frequency-domain, and time-frequency-domain data;

Step 14: based on the first feature vectors, construct and train the binary-tree classifier network BT-SVM-NN, in which each non-leaf node of the binary-tree classifier network is a two-class support vector machine and nearest-neighbor hybrid classifier SVM-NN.
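Steps 11 and 12 can be sketched as follows, assuming a moving-average filter, per-channel standardization, and fixed-length segments; the embodiment fixes none of these choices.

```python
import numpy as np

def preprocess(signal, smooth=5):
    """Step 11 sketch. signal: (T, 9) nine-axis sequence.
    Moving-average denoising, then zero-mean/unit-variance per channel."""
    kernel = np.ones(smooth) / smooth
    denoised = np.apply_along_axis(
        lambda ch: np.convolve(ch, kernel, mode="same"), 0, signal)
    return (denoised - denoised.mean(0)) / (denoised.std(0) + 1e-8)

def segment(signal, length=50):
    """Step 12 sketch: split into equal-length sections, each standing in
    for one basic human action (real segmentation would be data-driven)."""
    n = signal.shape[0] // length
    return [signal[i * length:(i + 1) * length] for i in range(n)]

# Dummy nine-axis recording: 200 samples x 9 channels.
data = np.random.default_rng(0).normal(size=(200, 9))
parts = segment(preprocess(data))
print(len(parts), parts[0].shape)  # 4 (50, 9)
```

Each resulting section is then the input from which one first feature vector is extracted in step 13.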
Constructing the binary-tree classifier network BT-SVM-NN in step 14 includes:

Step 14.1: construct an optimal imperfect binary-tree structure based on the relative distances between the sample classes;

Step 14.2: based on the imperfect binary-tree structure, construct and train a corresponding two-class SVM-NN hybrid-mode classifier for each non-leaf node, dividing all the categories in each parent node between its two child nodes, until every child node is a leaf node containing only one basic-action category.

The first feature vector includes:

feature values computed from the time-domain data, including the count, peak-to-peak value, zero-crossing count, mean, mean square deviation, energy, inter-axis correlation coefficients, skewness, and kurtosis computed from the nine-axis posture data of the basic human action;

feature values computed from the frequency-domain data, including the Fourier coefficients, energy spectral density, and frequency-domain entropy obtained by Fourier transform of the nine-axis posture signals of the basic human action;

feature values computed from the time-frequency-domain data, including the wavelet energy ratios in the different directions extracted by wavelet transform of the nine-axis posture signals of the basic human action.
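A minimal sketch of extracting such a feature vector from one action segment, covering a subset of the listed time- and frequency-domain features; the wavelet energy ratios are omitted, and the exact feature set and ordering here are assumptions.

```python
import numpy as np

def features(section):
    """section: (T, 9) basic-action segment -> 1-D feature vector with
    six assumed features per channel (a subset of the listed set)."""
    feats = []
    for ch in section.T:  # one channel at a time
        zero_cross = int(np.sum(ch[:-1] * ch[1:] < 0))       # zero-crossing count
        spectrum = np.abs(np.fft.rfft(ch)) ** 2              # power spectrum
        p = spectrum / (spectrum.sum() + 1e-12)
        freq_entropy = -np.sum(p * np.log(p + 1e-12))        # frequency-domain entropy
        feats += [ch.mean(),                                  # mean
                  ch.std(),                                   # mean square deviation
                  np.ptp(ch),                                 # peak-to-peak value
                  zero_cross,
                  spectrum.sum(),                             # energy
                  freq_entropy]
    return np.array(feats)

section = np.random.default_rng(1).normal(size=(50, 9))
print(features(section).shape)  # (54,) = 9 channels x 6 features
```

The same extraction applied to user data yields the second feature vector, so that training and recognition operate in one feature space.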
In another embodiment, building the binary tree classifier network BT-SVM-NN described in step 14 includes:
(1) for each elementary action segment of the standard human action data set, extracting from the time domain, frequency domain and time-frequency domain a group of characteristic parameters that can represent and distinguish each elementary action; the characteristic parameters comprise l feature quantities, which are numbered; the l feature quantities of each elementary action segment compose the first feature vector of that action segment, and the first feature vectors of all action segments compose the first feature vector set, where l ≥ 2.
(2) taking the first feature vector set as the training sample set; the k classes in the training sample set are named class 1, class 2, ..., class k, and C is the set composed of the k sample classes; the relative distance matrix D between sample classes is constructed from the training sample set, where the first and second columns hold the labels of classes i and j respectively, and the third column holds the relative distance between classes i and j;
(3) finding in D the two classes i, j of the set C with the maximum relative distance, storing them in the sets C1 and C2 respectively, and letting C = C − (C1 ∪ C2);
if C is empty, go to (6);
(4) for each sample class m (m ∈ C), looking up in D the minimum relative distances Dmc1 and Dmc2 from m to the sample classes in C1 and C2 respectively; if Dmc1 < Dmc2, m is added to C1, otherwise to C2; this step is repeated until all sample classes in C have been stored in C1 and C2;
(5) taking C1 and C2 as the left and right subtrees of the binary tree network structure respectively, which completes the selection of the positive and negative classes of one binary classification;
(6) letting C = C1 and returning to step (2), so that the left subtree is further divided into two subtrees, until every class becomes a leaf node of the binary tree;
(7) likewise, letting C = C2 and returning to step (2), so that the right subtree is also further divided into two subtrees, until every class becomes a leaf node of the binary tree.
After the binary tree hybrid classifier network structure corresponding to the standard human action data set is obtained, one classifier is trained at each non-leaf node of the binary tree structure, and different SVM classifiers are trained according to the different classification demands that different nodes place on the sample classes.
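Steps (2)–(7) above can be sketched as a recursive splitting of the class set, seeded by the pair with the largest relative distance. The following is a minimal illustrative sketch, assuming the relative distances are precomputed into a symmetric lookup table; names and the tuple-based tree representation are not from the patent.

```python
def build_tree(classes, rel_dist):
    """classes: list of class labels; rel_dist: dict[(i, j)] -> relative distance."""
    if len(classes) == 1:
        return classes[0]                      # leaf node: one elementary action class
    d = lambda a, b: rel_dist[(a, b)] if (a, b) in rel_dist else rel_dist[(b, a)]
    # seed C1 and C2 with the two classes of maximum relative distance (step (3))
    i, j = max(((a, b) for a in classes for b in classes if a != b),
               key=lambda p: d(*p))
    c1, c2 = [i], [j]
    for m in classes:
        if m in (i, j):
            continue
        # step (4): assign m to the side whose nearest class is closest
        if min(d(m, a) for a in c1) < min(d(m, b) for b in c2):
            c1.append(m)
        else:
            c2.append(m)
    # at this node a two-class SVM-NN classifier would be trained on c1 vs c2
    return (build_tree(c1, rel_dist), build_tree(c2, rel_dist))
```

Splitting on the most distant class pair first keeps the easiest separations near the root, so misclassifications high in the tree (which cannot be recovered lower down) are least likely.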
The relative distance between two sample classes, class i and class j, is calculated as follows:
(1) compute the sample centers of the two classes, denoted ci and cj respectively;
(2) compute the Euclidean distance between the two classes, denoted dij;
(3) compute the minimal hypersphere radii of the two classes, denoted Ri and Rj respectively;
(4) compute the relative distance between the two classes, denoted Dij, by normalising the inter-class distance by the hypersphere radii: Dij = dij/(Ri + Rj).
The sample center of the i-th class is computed as ci = (1/ni) Σx∈Xi x, where X is the sample set containing the k classes, Xi is the training sample set of the i-th class, i = 1, 2, ..., k, ni is the sample size of the i-th class, and x is a feature vector in the training sample set Xi.
The inter-class Euclidean distance is computed as dij = ‖ci − cj‖, where ci and cj are the sample centers of classes i and j respectively; equivalently it can be expressed as dij = sqrt(Σp=1..l (x̄ip − x̄jp)²), where x̄ip is the mean of the p-th feature quantity over all training sample feature vectors in sample class i, x̄jp is the corresponding mean in sample class j, and l is the number of feature quantities in a feature vector.
The minimal hypersphere radius of the i-th class is computed as Ri = maxx∈Xi ‖x − ci‖, where X is the sample set containing the k classes, Xi is the training sample set of the i-th class, and ci is the sample center of class i.
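The center, distance and radius computations above can be sketched as follows. This is an illustrative sketch: the normalisation Dij = dij/(Ri + Rj) in the last line is an assumption reconstructed from the quantities the text defines, and the function names are not from the patent.

```python
import math

def center(samples):
    """Sample center: per-feature mean over the class's feature vectors."""
    n = len(samples)
    return [sum(s[p] for s in samples) / n for p in range(len(samples[0]))]

def euclid(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def relative_distance(class_i, class_j):
    ci, cj = center(class_i), center(class_j)
    dij = euclid(ci, cj)                         # inter-class Euclidean distance
    ri = max(euclid(x, ci) for x in class_i)     # minimal hypersphere radius of class i
    rj = max(euclid(x, cj) for x in class_j)
    return dij / (ri + rj)                       # assumed normalisation
```

Normalising by the radii makes the distance scale-free: two compact classes far apart score higher than two sprawling classes whose centers are equally far apart.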
The specific implementation of the current recognition unit includes:
Step 21: extract a second feature vector from the time-domain, frequency-domain and time-frequency-domain data of the action data segments obtained from the action pretreatment unit;
Step 22: perform classification and recognition on the second feature vector using the binary tree classifier network BT-SVM-NN, confirming whether the training action corresponding to the action data segments is a standard action or a non-standard action.
Wherein the second feature vector includes:
feature values calculated from the time-domain data, including the sum, peak-to-peak value, zero-crossing count, mean, standard deviation, energy, inter-axis correlation coefficients, skewness and kurtosis computed from the nine-axis attitude signals of the action data segments;
feature values calculated from the frequency-domain data, including the Fourier coefficients, energy spectral density and frequency-domain entropy obtained after performing a Fourier transform on the nine-axis attitude signals of the action data segments;
feature values calculated from the time-frequency-domain data, including the wavelet energy ratios in different directions extracted after performing a wavelet transform on the nine-axis attitude signals of the action data segments.
In both the first and the second feature vector, the mean square deviation is the standard deviation, i.e. the arithmetic square root of the variance, which reflects the dispersion of a data set and can be expressed as K = sqrt((1/N) Σi=1..N (xi − μ)²), where μ is the statistical mean, N is the number of samples, xi is a sample, and K is the standard deviation.
The skewness is a statistical feature measuring the direction and degree of skew of the sensor data distribution and can be expressed as S = (1/N) Σi=1..N ((Xi − X̄)/σ)³, where σ is the standard deviation, X̄ is the mean, N is the number of samples, and Xi is a sample.
The kurtosis reflects the steepness of the sensor data at the peak of the data curve and can be expressed as K = (1/N) Σi=1..N ((Xi − X̄)/σ)⁴, where σ is the standard deviation, X̄ is the mean, N is the number of samples, and Xi is a sample.
The inter-axis correlation coefficient is an index measuring the degree of linear correlation between variables and is a commonly used statistical feature; it can be expressed as r = Σ(Xi − X̄)(Yi − Ȳ) / sqrt(Σ(Xi − X̄)² · Σ(Yi − Ȳ)²), where Xi and Yi are the values of the two samples, and X̄ and Ȳ are the means of the samples Xi and Yi respectively.
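The statistical features just defined (standard deviation, skewness, kurtosis, inter-axis correlation) can be sketched as follows, using the population (1/N) forms implied by the formulas above; the function names are illustrative.

```python
import math

def std_dev(x):
    """Population standard deviation: sqrt of the mean squared deviation."""
    n = len(x)
    mu = sum(x) / n
    return math.sqrt(sum((v - mu) ** 2 for v in x) / n)

def skewness(x):
    """Third standardised moment: direction and degree of distribution skew."""
    n = len(x)
    mu = sum(x) / n
    s = std_dev(x)
    return sum(((v - mu) / s) ** 3 for v in x) / n

def kurtosis(x):
    """Fourth standardised moment: steepness of the distribution peak."""
    n = len(x)
    mu = sum(x) / n
    s = std_dev(x)
    return sum(((v - mu) / s) ** 4 for v in x) / n

def correlation(x, y):
    """Pearson correlation between two axes of sensor data."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den
```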
The specific implementation of the evaluation unit includes:
Step 31: based on the non-standard action confirmed in step 22, compute the quaternions of the action data segments of the non-standard action;
Step 32: resolve the angle information of each joint of the non-standard action using the quaternion method;
Step 33: compare the angle information of each joint of the non-standard action with the standard data of the corresponding joint, obtaining the angular deviation information of each joint of the non-standard action.
Wherein the standard data of the corresponding joint is obtained by the following steps:
Step 31.1: after filtering and denoising, normalising and segmenting the standard training action, obtain basic human actions, i.e. the training action is divided into several elementary action segments, each segment being one elementary action category;
Step 31.2: based on the basic human actions, compute the quaternion data of each limb of the human body;
Step 31.3: based on the quaternion data of the two adjacent limbs at each human joint, compute the angle information of each joint of the human body using the quaternion method; this angle information is the standard data.
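One common way to realise step 31.3 is to take the relative rotation between the two limb quaternions, q_rel = conj(q_parent) · q_child, and read the joint angle off its scalar part. The sketch below assumes unit quaternions in (w, x, y, z) layout; both the layout and the function names are assumptions, not details given in the patent.

```python
import math

def q_conj(q):
    w, x, y, z = q
    return (w, -x, -y, -z)

def q_mul(a, b):
    """Hamilton product of two quaternions in (w, x, y, z) layout."""
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def joint_angle(q_parent, q_child):
    """Angle (radians) of the relative rotation between two adjacent limbs."""
    w = q_mul(q_conj(q_parent), q_child)[0]   # scalar part of q_rel
    w = max(-1.0, min(1.0, w))                # clamp against numeric drift
    return 2.0 * math.acos(abs(w))            # shortest-arc angle in [0, pi]
```

Taking abs(w) before the arccos picks the shorter of the two equivalent rotations (q and −q represent the same orientation), so the reported joint angle always lies in [0, π].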
The action recognition unit of the present embodiment treats the collected human action as an action sequence and segments it, each segment being one elementary action category; the trained hybrid-mode classifier network is used to recognise the action category of each segment, which is then compared with the standard action category sequence: if the categories are identical, the human action is judged standard, and if they differ, it is judged erroneous. For a human action judged erroneous, the quaternion method is used to compute the fused human joint angles, which are compared with the joint angles of the standard action segment of the corresponding order to obtain the deviation of each specific joint angle, finally completing an efficient and comprehensive comparison of the collected human action posture with the standard movement posture. The method of the invention has high precision, and the quantised deviation information enables better corrective guidance and evaluation of the human action posture.
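The segment-by-segment check described above can be sketched in a few lines: each recognised category is compared with the standard category sequence, and only the mismatched segments are forwarded to the joint-angle comparison. This is an illustrative helper, not part of the patent's specification.

```python
def flag_segments(recognised, standard):
    """Return indices of action segments whose recognised category deviates
    from the standard category sequence; these segments then undergo the
    detailed joint-angle comparison."""
    return [k for k, (r, s) in enumerate(zip(recognised, standard)) if r != s]
```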
In summary, compared with the prior art, the present system has the following beneficial effects:
(1) In the present invention, the wireless action collecting devices are worn on the user in wearable form, and each wearable device communicates with the main control unit via NRF wireless communication. This minimises interference with the user's training actions while effectively collecting the user's training action information. Compared with image/video-based collection methods, wearable wireless collecting devices are more convenient and portable, can collect human action information comprehensively, are less affected by the environment, and are relatively inexpensive.
(2) The present invention is based on Android platform mobile devices, which greatly improves the portability and ease of use of the system. Together with the portable, easy-to-use wearable wireless action acquisition system, this allows the user to train various human technical movements efficiently anytime and anywhere, without being limited by location.
(3) The present invention uses the OpenGL ES 2.0 API professional graphics programming interface on the Android platform. This graphics interface is designed directly on top of the graphics hardware and can efficiently and smoothly complete the drawing, rendering and display of three-dimensional images. It greatly improves the real-time performance and aesthetics of displaying the restored user training action posture, while also saving computing resources of the Android device and improving the operating fluency of the whole system.
(4) The present invention fuses a human motion recognition method and a direct joint-angle comparison method to jointly assess the user's training actions: the BT-SVM-NN hybrid-mode classifier network is trained with standard training actions; this pattern classifier network is then used to recognise and classify the user's training actions; the recognised user training action sequence is compared with the standard database in the cloud knowledge base to obtain an initial evaluation of the student's training actions; finally, the direct comparison result between the user's action joint angles and the standard action joint angles is fused with the initial evaluation result to obtain a comprehensive final assessment. The assessment result obtained by this method can accurately and carefully find the training action segments in which the student deviates, and mark the specific limb actions and degree of deviation (angle), so that careful and efficient corrective guidance can be provided to the student.
(5) When giving corrective action guidance to the student, the present invention drives one three-dimensional human model with the standard training action posture from the cloud database to form a coach model, and drives another three-dimensional human model with the collected movement posture of the user to form a student model; the coach model and the student model are then superimposed and displayed together in different colours. Compared with split-screen display, this demonstrates the deviating limb actions to the student more clearly and completely, and enables more effective corrective guidance.
Finally, the above is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. Any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included within the protection scope of the present invention.
Claims (10)
1. An action auxiliary training system based on motion recognition, characterised in that it comprises an action acquisition module and an action auxiliary training module;
the action acquisition module is configured to obtain limb posture inertial data of a user's training action and send it to the action auxiliary training module;
the action auxiliary training module is configured to perform a three-dimensional restoration demonstration based on the limb posture inertial data, and, after performing human action recognition, to carry out action assessment and corrective action training.
2. The system as claimed in claim 1, characterised in that the action acquisition module comprises a collecting unit, a main control unit and a first WIFI unit;
the collecting unit comprises wearable devices worn on multiple body parts of the user, wherein a single wearable device comprises a nine-axis inertial sensor, a first single-chip processor and a first NRF communication module, and is configured to obtain the limb posture inertial data of the user's training action and send it to the main control unit;
the main control unit comprises a second single-chip processor, a second NRF communication module and multiple indicator lamps corresponding to the wearable devices of the multiple body parts, and is configured to integrate the limb posture inertial data collected by the wearable devices of the multiple body parts and send it to the first WIFI unit;
the first WIFI unit is configured to transmit the limb posture inertial data to the action auxiliary training module via WIFI.
3. The system as claimed in claim 1, characterised in that the action auxiliary training module comprises a standard training action demonstration unit, an action auxiliary training unit and a second WIFI unit;
the standard training action demonstration unit is configured to drive a three-dimensional animated human model with standard training action joint angle sequence information to demonstrate the standard training action;
the action auxiliary training unit is configured to resolve the limb posture inertial data to obtain limb posture joint angle data for three-dimensional model demonstration; to recognise the limb posture inertial data and compare it with the standard training action data sequence of the cloud database, obtaining an initial evaluation result for the training action; to calculate the difference between the limb posture joint angle data and the joint angle data of the standard training action data of the cloud database, obtaining the joint angle deviation information of the training action; and to obtain a final assessment result from the initial evaluation result and the joint angle deviation information, for corrective guidance;
the second WIFI unit is configured to communicate with the action acquisition module and the cloud database via WIFI.
4. The system as claimed in claim 1, characterised in that it further comprises a cloud database, the cloud database comprising a standard training action database and a user information database;
the standard training action database is configured to store the standard action data sequence of the training action and each joint angle datum of the standard action data sequence;
the user information database is configured to obtain from the action auxiliary training module, and to store, the user's learning state, number of exercises and assessment results for different training courses.
5. The system as claimed in claim 3, characterised in that the standard training action demonstration unit further comprises a first three-dimensional model drawing unit and a demonstration unit;
the first three-dimensional model drawing unit is configured to establish a three-dimensional human model according to the joint angle data of the standard training action data, and to drive the three-dimensional human model to move in three-dimensional space;
the demonstration unit is configured to drive the motion of the three-dimensional human model in three-dimensional space using the joint angle data of the standard training action data, so as to demonstrate the standard training action.
6. The system as claimed in claim 3, characterised in that the action auxiliary training unit further comprises an action pretreatment unit, an action recognition unit, a second three-dimensional model drawing unit and an evaluation unit;
the action pretreatment unit is configured to filter and denoise, normalise and segment the limb posture inertial data, obtaining several action data segments;
the action recognition unit is configured to extract feature vectors from the action data segments and perform recognition via the trained binary tree classifier network, obtaining an initial evaluation result;
the second three-dimensional model drawing unit is configured to calculate the joint angle data sequence of each limb from the limb posture inertial data, to build joint angle information frames from the joint angle data sequence, and to drive the three-dimensional human model to move in three-dimensional space via the joint angle information frames;
the evaluation unit is configured to calculate the difference between the joint angles of each limb and the joint angle data of the standard training action data of the cloud database, obtaining the joint angle deviation information of the training action, and to give a final evaluation result according to the joint angle deviation information and the initial evaluation result.
7. The system as claimed in claim 2, characterised in that in the collecting unit, the wearable devices worn on the multiple body parts of the user are wearable devices for 17 body parts, the 17 body parts comprising:
the head, left shoulder, right shoulder, left upper arm, right upper arm, left forearm, right forearm, left hand, right hand, chest, waist, left thigh, right thigh, left lower leg, right lower leg, left foot and right foot.
8. The system as claimed in claim 2, characterised in that the limb posture inertial data integrated by the main control unit comprises:
x-, y- and z-axis acceleration information measured by a three-axis accelerometer, x-, y- and z-axis angular velocity information measured by a three-axis gyroscope, and x-, y- and z-axis geomagnetic information measured by a three-axis magnetometer.
9. The system as claimed in claim 6, characterised in that the joint angle information frame comprises:
the neck joint angle, left and right wrist joint angles, left and right elbow joint angles, left and right shoulder joint angles, waist joint angle, left and right hip joint angles, left and right knee joint angles and left and right ankle joint angles.
10. The system as claimed in claim 6, characterised in that the action recognition unit further comprises:
a classifier building and training unit, configured to filter and denoise, normalise and segment the standard training action data, to extract a first feature vector from the time-domain, frequency-domain and time-frequency-domain data of the processed data, and to train the binary tree classifier network BT-SVM-NN, each network node of the binary tree classifier network being a two-class support vector machine and nearest-neighbour hybrid classifier SVM-NN;
a current recognition unit, configured to extract a second feature vector from the action data segments and input the second feature vector into the binary tree classifier network BT-SVM-NN for recognition, confirming whether the training action corresponding to the action data segments is a standard action or a non-standard action.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710313765.4A CN107243141A (en) | 2017-05-05 | 2017-05-05 | A kind of action auxiliary training system based on motion identification |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107243141A true CN107243141A (en) | 2017-10-13 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | RJ01 | Rejection of invention patent application after publication | Application publication date: 20171013 |