CN106650687A - Posture correction method based on depth information and skeleton information - Google Patents
Posture correction method based on depth information and skeleton information
- Publication number
- CN106650687A CN106650687A CN201611251820.3A CN201611251820A CN106650687A CN 106650687 A CN106650687 A CN 106650687A CN 201611251820 A CN201611251820 A CN 201611251820A CN 106650687 A CN106650687 A CN 106650687A
- Authority
- CN
- China
- Prior art keywords
- bone
- skeleton
- information
- vector
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
Abstract
The invention relates to a posture correction method based on depth information and skeleton information, comprising the steps of: (1) screening valid skeleton points based on user ID and the depth values of skeleton-space coordinates; (2) smoothing the skeleton information and standardizing the coordinates of the skeleton points in the skeleton-space coordinate system; (3) drawing bone vectors and calculating the direction cosines of each bone vector; (4) obtaining a training data set and a test data set; (5) using these data sets as the input of a Bayesian-regularized BP neural network, which recognizes 25 skeleton points and 24 bone segments; (6) analyzing and processing the result. The method overcomes the color camera's susceptibility to lighting, accurately tracks the human body in the field of view, and also improves the naturalness of human-computer interaction.
Description
Technical field
The present invention relates to a posture correction method based on depth information and skeleton information, and belongs to the field of intelligent sensing and intelligent computing.
Background technology
In daily life, posture correction is applied in many fields. In medicine, it can be used in the rehabilitation of movement disorders such as Parkinson's disease and muscle spasm; in education, it can be used to teach basketball, rhythmic gymnastics, dancing and similar activities; in fitness and recreation, it can be used for figure correction in yoga, Pilates and the like, helping exercisers achieve the expected training effect. It is reported that among Asians over 40 years old, one person in every three suffers from locomotive syndrome. Rational exercise therapy is an indispensable rehabilitation measure for this kind of illness, and for gymnastics, yoga, Pilates and similar activities the quality with which an action is completed affects the effect of later training; incorrect actions may even cause bone dislocation or muscle strain. Therefore, in order to achieve a better medical rehabilitation effect and reduce unnecessary bodily injury during training, it is particularly important to study an intelligent and efficient method for supervising and correcting posture.
In medicine, traditional posture correction methods all require the accompaniment and guidance of professionals, which consumes considerable manpower and material resources while also making participants feel bored, so the expected rehabilitation effect is often not reached. In addition, some research institutions have applied the Sony EyeToy and the Nintendo Wii to limb rehabilitation training, but the limitations of their two-dimensional image processing have constrained the development of these techniques in the rehabilitation field.
The Kinect somatosensory device released by Microsoft offers real-time 3D motion capture, microphone input, audio-visual recognition and other functions. In particular, the second-generation Kinect provides an optimized skeleton tracking function: a higher-fidelity depth camera combined with improved software brings a series of improvements in skeleton tracking. It now tracks 6 complete skeletons (the first-generation Kinect tracks at most 2) and 25 skeleton points per target (the first generation tracks 20), its automatic tracking and positioning is more precise and stable than the previous generation, and its tracking range is larger.
Content of the invention
To address the deficiencies of existing methods, the present invention proposes a posture correction method based on depth information and skeleton information.
The present invention exploits the Kinect camera's ability to obtain the user's depth data and skeleton data, and aims to develop a posture correction method that is strongly interference-resistant, convenient, practical and effective. The method comprises: 1) data acquisition: depth data and skeleton data within the sensor's field of view are collected with a Kinect 2.0 (the 25 skeleton points of up to 6 human bodies can be collected), and skeleton points are screened based on the user ID (the unique tracking ID that the Kinect assigns to each user in the field of view in order to distinguish which user the current skeleton data belongs to) and the depth values of the skeleton-space coordinates; 2) data preprocessing: the skeleton data are smoothed, the coordinates of the skeleton points are standardized, and bone vectors are drawn according to the principles of human anatomy; 3) feature extraction and recognition: the direction cosines of the bone vectors are calculated, the 3 direction cosine values of each bone vector are extracted as features and input into a Bayesian-regularized BP neural network for recognition; finally, the recognition result is analyzed and processed and displayed on the user interface, where standard bone segments and skeleton points are shown in bright green while nonstandard bone segments and skeleton points are shown in bright red and announced by a voice prompt.
The present invention improves the practicality, accuracy and robustness of posture correction.
The technical scheme of the present invention is as follows:
Explanation of terms:
1. The BP (Back Propagation) neural network was proposed in 1986 by the group of scientists headed by Rumelhart and McClelland. It is a multi-layer feed-forward network trained by the back-propagation algorithm and is one of the most widely used neural network models. A BP neural network can learn and store a large number of input-output mapping relations without requiring the mathematical equations describing these mappings to be given in advance. Its learning rule uses steepest descent: the weights and thresholds of the network are continuously adjusted by back propagation so as to minimize the network's sum of squared errors. The topology of a BP neural network model comprises an input layer, a hidden layer and an output layer.
2. Depth data stream: a series of depth data collected by the Kinect in real time.
3. Color data stream: a series of color data collected by the Kinect in real time.
A posture correction method based on depth information and skeleton information comprises the following concrete steps:
(1) Screening of valid skeleton points based on user ID and the depth values of skeleton-space coordinates: the user skeleton data and depth data are obtained with a Kinect 2.0 camera, and all skeleton points of the target user are selected. The user ID is the Kinect 2.0 sensor's unique user-tracking identifier, assigned to each user within the effective viewing distance to distinguish which user a piece of skeleton information belongs to.
(2) The skeleton data selected in step (1) are smoothed, and the coordinates of the skeleton points in the skeleton-space coordinate system are standardized, reducing the differences in skeleton-point positions between skeleton frames. The Kinect development kit, the Kinect for Windows SDK, provides its data object types in the form of skeleton frames, each frame containing 25 skeleton points.
(3) Based on the coordinates in the skeleton-space coordinate system of the skeleton points standardized in step (2), bone vectors are drawn, the direction cosines of each bone vector are calculated, and these direction cosine values are extracted as features.
(4) A training data set and a test data set are obtained. These data sets consist of the direction cosines of the user's bone vectors; each skeleton frame contains 24 bone vectors, and each bone vector in turn has three direction cosine values.
(5) The training data set and test data set obtained in step (4) serve as the input of a Bayesian-regularized neural network, which identifies the 25 skeleton points and 24 bone segments.
(6) Result analysis and processing: the analysis of each bone segment and skeleton point, together with the result, is displayed on the user interface in real time. The skeleton data frames of the target user captured within the effective viewing distance of the Kinect 2.0 camera are presented on the user interface in real time; the bone segments and skeleton points corresponding to bone vectors that match the standard posture are shown in bright green, while the bone segments and skeleton points corresponding to bone vectors with incorrect posture are shown in bright red, accompanied by a voice prompt.
Preferably according to the present invention, step (1) comprises the following steps:
A. The skeleton data and depth data of all objects within the effective viewing distance are obtained with the Kinect 2.0 camera. The skeleton data are the coordinates of the skeleton points in the skeleton-space coordinate system, which is defined as follows: the Kinect 2.0 camera is the origin, the Z axis coincides with the direction the camera faces, the positive Y axis extends upward, and the positive X axis extends along the viewing angle of the Kinect sensor. The effective viewing distance is the range within which the Kinect 2.0 camera can correctly collect information; its span is 0.8-3.0 m. The skeleton points are: head, neck, right index finger, right thumb, right palm center, right wrist, right elbow, right shoulder, shoulder center, left shoulder, left elbow, left wrist, left palm center, left thumb, left index finger, spine, hip center, right hip, left hip, right knee, left knee, right ankle, left ankle, right foot and left foot, 25 skeleton points in all.
B. The skeleton points of the target user are determined and the skeleton points of other users are filtered out: the depth value of each skeleton point of each user is obtained with the Kinect 2.0 camera and accumulated, and the average depth value over all skeleton points of each user is calculated. These average depth values are compared; the user with the smallest average depth value is the target user, whose skeleton points are preserved while the skeleton points of the other users are filtered out. When several users are still within the effective viewing distance, the many invalid skeleton points would degrade the precision of the later feature extraction, so the skeleton points of the target body must be determined and the other, invalid skeleton points filtered out.
The Kinect 2.0 can simultaneously track the 25 skeleton points of up to 6 target users, together with the depth data stream and color data stream of objects within the field of view. The skeleton API in the Kinect for Windows SDK provides the positional information of the users in front of the Kinect, including detailed postures, the three-dimensional coordinates of the skeleton points and the user ID information. The data object types are provided in the form of skeleton frames, and each frame holds 25 skeleton points.
According to the present invention, step (2) comprises the following steps:
C. The smoothing (Smoothing) attribute is set; its value is a floating-point number between 0 and 1. The larger the smoothing value, the stronger the smoothing; a value of 0 means no smoothing is performed.
The correction (Correction) attribute is set; its value is a floating-point number between 0 and 1. The smaller the correction value, the smoother the skeleton information.
The jitter radius (JitterRadius) attribute is set; its value is a floating-point number between 0 and 1. When the jitter of a skeleton point exceeds the configured jitter radius, it is corrected to lie within this radius.
The maximum boundary (MaxDeviationRadius) attribute of the jitter radius is set; its value is a floating-point number between 0 and 1. Any point beyond this maximum boundary of the jitter radius is not considered jitter but is recognized as a new point.
The predicted frame size (Prediction) attribute is set; its value is a floating-point number between 0 and 1, with a default of 0.
D. The Kinect skeleton data smoothing algorithm is invoked, i.e., the smoothing parameters set in step C are passed to SkeletonStream.Enable(), which smooths the skeleton data.
Smoothing skeleton data incurs a performance cost: the more smoothing is applied, the greater the performance consumption. There is no rule of thumb for setting the smoothing parameters; continuous testing and debugging are needed to reach the best performance and effect, and different stages of a running program may require different smoothing parameters.
During skeleton-point tracking, certain situations can make the skeleton motion appear jumpy, for example when the target user's actions are not sufficiently fluent or the Kinect hardware performs poorly. The relative positions of skeleton points may then change greatly between frames, which negatively impacts the application, for instance degrading the user experience or causing unintended control actions. By smoothing the skeleton data and standardizing the coordinates of the skeleton points, the differences in skeleton-point position between frames are reduced.
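The smoothing itself happens inside the SDK once the parameters of step C are passed to SkeletonStream.Enable(); the sketch below only illustrates, on a single coordinate stream, what two of those parameters do. It is a simplified stand-in (plain exponential blending plus jitter clamping, per the patent's description of the parameters), not the SDK's actual filter.

```python
def smooth_stream(raw, smoothing=0.5, jitter_radius=0.05):
    """Illustrate the Smoothing and JitterRadius parameters on one coordinate.

    raw: list of raw per-frame values for a single skeleton-point coordinate.
    Jitter beyond jitter_radius is pulled back inside the radius (step C),
    then each sample is blended with the previous filtered value; a larger
    smoothing value weights the past more heavily, i.e. smooths more.
    """
    filtered = [raw[0]]
    for sample in raw[1:]:
        prev = filtered[-1]
        step = sample - prev
        if abs(step) > jitter_radius:               # jitter correction
            sample = prev + jitter_radius * (1 if step > 0 else -1)
        filtered.append(smoothing * prev + (1.0 - smoothing) * sample)
    return filtered
```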
Preferably according to the present invention, step (3) comprises the following steps:
E. According to the principles of human anatomy, each pair of adjacent skeleton points among the 25 skeleton points extracted in step (2) is connected in turn to form a bone segment, and each bone segment is defined as a bone vector, giving 24 bone vectors in all.
F. Let an arbitrary bone vector a be obtained by connecting the skeleton point whose coordinates in the skeleton-space coordinate system are (x1, y1, z1) with the skeleton point whose coordinates are (x2, y2, z2). The bone vector a is then represented as shown in formula (I):
a = (x2 − x1, y2 − y1, z2 − z1)   (I)
G. Let α, β and γ be the angles between the bone vector a and the X, Y and Z axes of the skeleton-space coordinate system. The three direction cosines of the bone vector a are then computed as shown in formulas (II), (III) and (IV):
cos α = (x2 − x1) / |a|   (II)
cos β = (y2 − y1) / |a|   (III)
cos γ = (z2 − z1) / |a|   (IV)
where |a| = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²).
Using the above algorithm, the three direction cosines of each of the 24 bone vectors are calculated in turn, and these cosine values are extracted as features.
When the user makes different postures, each bone segment of the human body carries different position and angle information; therefore, a certain class of action can be characterized by the direction cosines of the 24 defined bone vectors.
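A direct transcription of formulas (I) to (IV) might look as follows; the set of 24 (parent, child) index pairs, called EDGES here, is a hypothetical encoding of the bone segments of Fig. 3.

```python
import numpy as np

def direction_cosines(p1, p2):
    """p1, p2: (x, y, z) skeleton-space coordinates of two adjacent skeleton
    points; returns (cos α, cos β, cos γ) of the bone vector from p1 to p2."""
    a = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)  # formula (I)
    norm = np.linalg.norm(a)                                       # |a|
    return tuple(a / norm)             # formulas (II)-(IV), one cosine per axis

# Usage: a frame's 25 points and the 24 (parent, child) index pairs yield
# the 24 x 3 feature block for that frame.
# features = [direction_cosines(points[i], points[j]) for i, j in EDGES]
```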
Preferably according to the present invention, step (4) comprises the following steps:
H. Obtaining the training data set: the target user completes a series of standard actions under the guidance of a professional; through steps (1) to (3), the Kinect 2.0 camera obtains the three direction cosines of each of the 24 bone vectors corresponding to each action, which constitute the training data set.
I. Obtaining the test data set: the target user completes a series of actions alone within the effective viewing distance of the Kinect 2.0 camera; through steps (1) to (3), the Kinect 2.0 camera obtains the three direction cosines of each of the 24 bone vectors corresponding to each action, which constitute the test data set.
In the early stage of posture correction, the user completes a series of standard actions under the guidance of a professional. The Kinect 2.0 camera records these actions in the form of skeleton frames, which then go through skeleton-point screening, skeleton-data smoothing and extraction of the direction-cosine features of the bone vectors, finally producing the required training data set. The training set contains a number of data frames (the more frames, the higher the precision); each frame of skeleton data for a posture comprises 24 bone vectors, and each bone vector has three direction cosines. In the posture correction stage, the user completes a series of actions alone within the Kinect's field of view. The Kinect likewise records these possibly nonstandard actions in the form of skeleton frames, which then go through the corresponding skeleton-point screening, skeleton-data smoothing and direction-cosine feature extraction, finally producing the required test data set.
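One plausible layout for these data sets, consistent with the 3 input nodes used in step (5), is one sample per bone vector, i.e. 24 samples of 3 direction cosines per frame. The sketch below assumes this layout, which the patent does not spell out.

```python
import numpy as np

def frames_to_dataset(frames):
    """frames: iterable of per-frame lists of 24 (cos α, cos β, cos γ) tuples.
    Returns an array of shape (num_frames * 24, 3): one sample per bone
    vector, matching the network's 3 input nodes."""
    samples = [cosines for frame in frames for cosines in frame]
    return np.asarray(samples, dtype=float)

# train_X = frames_to_dataset(standard_action_frames)   # training data set
# test_X  = frames_to_dataset(user_action_frames)       # test data set
```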
Preferably according to the present invention, step (5) comprises the following steps:
G. The number of BP neural network layers is set to 3, the number of input-layer nodes to 3 and the number of output-layer nodes to 1, and the number m of hidden-layer nodes is determined by formula (V):
m = √(n + l) + α   (V)
In formula (V), n is the number of input-layer nodes, l is the number of output-layer nodes, and α is an integer between 1 and 10.
Study of BP neural networks shows that a simple action recognition problem can be handled by a network with a single hidden layer, so a three-layer BP neural network is chosen here. The numbers of input-layer and output-layer nodes are usually determined by the dimensions of the input and output variables of the practical problem. The input variable of the BP neural network is the three-dimensional direction-cosine feature, so the input layer has 3 nodes; the output variable indicates whether each bone vector matches the corresponding bone vector of the standard posture, i.e. 1 or 0, so the output layer has 1 node. The number of hidden-layer nodes greatly affects network performance, and different node counts lead to very different results: with too few nodes the network iterates quickly but models the problem insufficiently, giving poor performance; with too many nodes the network structure becomes complex, the amount of computation increases and training takes longer, so a suitable number of nodes must be selected.
K. The training data set is input into the Bayesian-regularized neural network as the training set, and the network is trained on each bone vector. In the posture correction stage, the test data set is input into the trained Bayesian-regularized neural network, which judges the correctness of each bone vector, i.e. whether it matches the corresponding bone vector of the standard bone posture in the training set.
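For concreteness, a compact sketch of a 3-m-1 BP network follows. True Bayesian regularization re-estimates the weight-decay strength from the data (as in MacKay's evidence framework, which MATLAB's trainbr implements); the fixed L2 penalty used here is a simplified stand-in, so this illustrates the network shape, formula (V) and back-propagation rather than the full Bayesian algorithm. Bias terms are omitted for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TinyBPNet:
    """3 input nodes (direction cosines), m hidden nodes from formula (V),
    1 output node (1 = matches the standard posture, 0 = does not)."""

    def __init__(self, n_in=3, alpha=4, n_out=1, lam=1e-3, lr=0.1, seed=0):
        m = int(round(np.sqrt(n_in + n_out))) + alpha   # formula (V)
        rng = np.random.default_rng(seed)
        self.W1 = rng.normal(0.0, 0.5, (n_in, m))
        self.W2 = rng.normal(0.0, 0.5, (m, n_out))
        self.lam, self.lr = lam, lr

    def forward(self, X):
        self.H = sigmoid(X @ self.W1)        # hidden layer activations
        return sigmoid(self.H @ self.W2)     # output in (0, 1)

    def train(self, X, y, epochs=2000):
        """X: (N, 3) direction cosines; y: (N, 1) match labels (0 or 1)."""
        for _ in range(epochs):
            out = self.forward(X)
            # back-propagation of squared error plus the L2 weight penalty
            d_out = (out - y) * out * (1 - out)
            d_hid = (d_out @ self.W2.T) * self.H * (1 - self.H)
            self.W2 -= self.lr * (self.H.T @ d_out + self.lam * self.W2)
            self.W1 -= self.lr * (X.T @ d_hid + self.lam * self.W1)

    def predict(self, X):
        return (self.forward(X) >= 0.5).astype(int)
```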
Beneficial effects of the present invention are:
1. Unlike a conventional color camera, the Kinect sensor provides a third dimension of depth data. It overcomes the color camera's susceptibility to external interference such as lighting, accurately tracks the human body within the field of view, and also improves the naturalness of human-computer interaction.
2. The Kinect can extract human skeleton information. When the user performs different actions, the corresponding skeleton points and bone segments carry different position and angle information, which provides a very reliable and direct basis for human posture recognition.
3. The Kinect 2.0 is a complete upgrade of the first-generation Kinect sensor. In skeleton tracking it can follow 6 complete skeletons (the first generation tracks at most 2) with 25 skeleton points per person (the first generation tracks 20), its automatic tracking and positioning is more precise and stable than the previous generation, and its tracking range is larger, so using the Kinect 2.0 greatly improves the accuracy of posture recognition.
4. Traditional feature extraction methods require complicated mathematical algorithms, are difficult to implement and are not efficient. The present invention directly studies the angle information of the human skeleton, which is more convenient and intuitive and also improves the running speed of the program.
5. The BP neural network has many defects, in particular the markedly reduced generalization ability caused by over-fitting. The Bayesian regularization algorithm effectively suppresses over-fitting and yields higher generalization ability.
6. The target user is located by user ID and the depth values of the skeleton-space coordinates, which improves the accuracy of skeleton tracking.
7. The present invention identifies and corrects the actions of the 25 skeleton points and 24 bone segments of the human body, which improves the correctness and robustness of posture correction.
Description of the drawings
Fig. 1 is a flow block diagram of the posture correction method based on depth information and skeleton information of the present invention;
Fig. 2 is a schematic diagram of the skeleton-point screening process of the present invention;
Fig. 3 is a schematic diagram of the 24 bone vectors defined by the present invention;
Fig. 4 is a schematic diagram of the direction cosines of the bone vector defined in embodiment 1;
Fig. 5 is a schematic flow diagram of the direction-cosine feature extraction of the present invention.
Specific embodiment
The present invention is further described below with reference to the accompanying drawings and an embodiment, but is not limited thereto.
Embodiment 1
A posture correction method based on depth information and skeleton information, as shown in Fig. 1, comprises the following concrete steps:
(1) Screening of valid skeleton points based on user ID and the depth values of skeleton-space coordinates: the user skeleton data and depth data are obtained with a Kinect 2.0 camera, and all skeleton points of the target user are selected. The user ID is the Kinect 2.0 sensor's unique user-tracking identifier, assigned to each user within the effective viewing distance to distinguish which user a piece of skeleton information belongs to. As shown in Fig. 2, this comprises the following steps:
A. The skeleton data and depth data of all objects within the effective viewing distance are obtained with the Kinect 2.0 camera. The skeleton data are the coordinates of the skeleton points in the skeleton-space coordinate system, which is defined as follows: the Kinect 2.0 camera is the origin, the Z axis coincides with the direction the camera faces, the positive Y axis extends upward, and the positive X axis extends along the viewing angle of the Kinect sensor. The effective viewing distance is the range within which the Kinect 2.0 camera can correctly collect information; its span is 0.8-3.0 m. The skeleton points are: head, neck, right index finger, right thumb, right palm center, right wrist, right elbow, right shoulder, shoulder center, left shoulder, left elbow, left wrist, left palm center, left thumb, left index finger, spine, hip center, right hip, left hip, right knee, left knee, right ankle, left ankle, right foot and left foot, 25 skeleton points in all.
B. The skeleton points of the target user are determined and the skeleton points of other users are filtered out: the depth value of each skeleton point of each user is obtained with the Kinect 2.0 camera and accumulated, and the average depth value over all skeleton points of each user is calculated. These average depth values are compared; the user with the smallest average depth value is the target user, whose skeleton points are preserved while the skeleton points of the other users are filtered out. When several users are still within the effective viewing distance, the many invalid skeleton points would degrade the precision of the later feature extraction, so the skeleton points of the target body must be determined and the other, invalid skeleton points filtered out.
The Kinect 2.0 can simultaneously track the 25 skeleton points of up to 6 target users, together with the depth data stream and color data stream of objects within the field of view. The skeleton API in the Kinect for Windows SDK provides the positional information of the users in front of the Kinect, including detailed postures, the three-dimensional coordinates of the skeleton points and the user ID information. The data object types are provided in the form of skeleton frames, and each frame holds 25 skeleton points.
(2) The skeleton data selected in step (1) are smoothed, and the coordinates of the skeleton points in the skeleton-space coordinate system are standardized, reducing the differences in skeleton-point positions between skeleton frames. The Kinect development kit, the Kinect for Windows SDK, provides its data object types in the form of skeleton frames, each frame containing 25 skeleton points. This comprises the following steps:
C. The smoothing (Smoothing) attribute is set; its value is a floating-point number between 0 and 1. The larger the smoothing value, the stronger the smoothing; a value of 0 means no smoothing is performed.
The correction (Correction) attribute is set; its value is a floating-point number between 0 and 1. The smaller the correction value, the smoother the skeleton information.
The jitter radius (JitterRadius) attribute is set; its value is a floating-point number between 0 and 1. When the jitter of a skeleton point exceeds the configured jitter radius, it is corrected to lie within this radius.
The maximum boundary (MaxDeviationRadius) attribute of the jitter radius is set; its value is a floating-point number between 0 and 1. Any point beyond this maximum boundary of the jitter radius is not considered jitter but is recognized as a new point.
The predicted frame size (Prediction) attribute is set; its value is a floating-point number between 0 and 1, with a default of 0.
D. The Kinect skeleton data smoothing algorithm is invoked, i.e., the smoothing parameters set in step C are passed to SkeletonStream.Enable(), which smooths the skeleton data.
Smoothing skeleton data incurs a performance cost: the more smoothing is applied, the greater the performance consumption. There is no rule of thumb for setting the smoothing parameters; continuous testing and debugging are needed to reach the best performance and effect, and different stages of a running program may require different smoothing parameters.
During skeleton-point tracking, certain situations can make the skeleton motion appear jumpy, for example when the target user's actions are not sufficiently fluent or the Kinect hardware performs poorly. The relative positions of skeleton points may then change greatly between frames, which negatively impacts the application, for instance degrading the user experience or causing unintended control actions. By smoothing the skeleton data and standardizing the coordinates of the skeleton points, the differences in skeleton-point position between frames are reduced.
(3) Based on the coordinates in the skeleton-space coordinate system of the skeleton points standardized in step (2), bone vectors are drawn, the direction cosines of each bone vector are calculated, and these direction cosine values are extracted as features. This comprises the following steps:
E. According to the principles of human anatomy, each pair of adjacent skeleton points among the 25 skeleton points extracted in step (2) is connected in turn to form a bone segment, and each bone segment is defined as a bone vector, giving the 24 bone vectors shown in Fig. 3.
F. Let the bone vector from the right shoulder joint to the right elbow joint be a, as shown in Fig. 4. With the three-dimensional coordinates of the shoulder joint being (x1, y1, z1) and those of the elbow joint being (x2, y2, z2), the bone vector a is represented as shown in formula (I):
a = (x2 − x1, y2 − y1, z2 − z1)   (I)
G. Let α, β and γ be the angles between the bone vector a and the X, Y and Z axes of the skeleton-space coordinate system. The three direction cosines of the bone vector a are then computed as shown in formulas (II), (III) and (IV):
cos α = (x2 − x1) / |a|   (II)
cos β = (y2 − y1) / |a|   (III)
cos γ = (z2 − z1) / |a|   (IV)
where |a| = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²).
Using the above algorithm, the three direction cosines of each of the 24 bone vectors are calculated in turn, and these cosine values are extracted as features, as shown in Fig. 5.
When the user makes different postures, each bone segment of the human body carries different position and angle information; therefore, a certain class of action can be characterized by the direction cosines of the 24 defined bone vectors.
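For concreteness, consider hypothetical coordinates (not taken from the patent): the shoulder joint at (0.20, 0.50, 2.00) and the elbow joint at (0.45, 0.25, 2.00), in meters. Formula (I) gives a = (0.25, −0.25, 0) with |a| = √(0.25² + 0.25²) ≈ 0.354, so formulas (II) to (IV) give cos α ≈ 0.707, cos β ≈ −0.707 and cos γ = 0; the upper arm then lies in the X-Y plane, at 45° to the X axis and 135° to the Y axis.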
(4) A training data set and a test data set are obtained. These data sets consist of the direction cosines of the user's bone vectors; each skeleton frame contains 24 bone vectors, and each bone vector in turn has three direction cosine values. This comprises the following steps:
H. Obtaining the training data set: the target user completes a series of standard actions under the guidance of a professional; through steps (1) to (3), the Kinect 2.0 camera obtains the three direction cosines of each of the 24 bone vectors corresponding to each action, which constitute the training data set.
I. Obtaining the test data set: the target user completes a series of actions alone within the effective viewing distance of the Kinect 2.0 camera; through steps (1) to (3), the Kinect 2.0 camera obtains the three direction cosines of each of the 24 bone vectors corresponding to each action, which constitute the test data set.
In the early stage of posture correction, the user completes a series of standard actions under the guidance of a professional. The Kinect 2.0 camera records these actions in the form of skeleton frames, which then go through skeleton-point screening, skeleton-data smoothing and extraction of the direction-cosine features of the bone vectors, finally producing the required training data set. The training set contains a number of data frames (the more frames, the higher the precision); each frame of skeleton data for a posture comprises 24 bone vectors, and each bone vector has three direction cosines. In the posture correction stage, the user completes a series of actions alone within the Kinect's field of view. The Kinect likewise records these possibly nonstandard actions in the form of skeleton frames, which then go through the corresponding skeleton-point screening, skeleton-data smoothing and direction-cosine feature extraction, finally producing the required test data set.
(5) The training data set and test data set obtained in step (4) serve as the input of a Bayesian-regularized neural network, which identifies the 25 skeleton points and 24 bone segments. This comprises the following steps:
G. The number of BP neural network layers is set to 3, the number of input-layer nodes to 3 and the number of output-layer nodes to 1, and the number m of hidden-layer nodes is determined by formula (V):
m = √(n + l) + α   (V)
In formula (V), n is the number of input-layer nodes, l is the number of output-layer nodes, and α is an integer between 1 and 10.
Study of BP neural networks shows that a simple action recognition problem can be handled by a network with a single hidden layer, so a three-layer BP neural network is chosen here. The numbers of input-layer and output-layer nodes are usually determined by the dimensions of the input and output variables of the practical problem. The input variable of the BP neural network is the three-dimensional direction-cosine feature, so the input layer has 3 nodes; the output variable indicates whether each bone vector matches the corresponding bone vector of the standard posture, i.e. 1 or 0, so the output layer has 1 node. The number of hidden-layer nodes greatly affects network performance, and different node counts lead to very different results: with too few nodes the network iterates quickly but models the problem insufficiently, giving poor performance; with too many nodes the network structure becomes complex, the amount of computation increases and training takes longer, so a suitable number of nodes must be selected.
K. The training data set is input into the Bayesian-regularized neural network as the training set, and the network is trained on each bone vector. In the posture correction stage, the test data set is input into the trained Bayesian-regularized neural network, which judges the correctness of each bone vector, i.e. whether it matches the corresponding bone vector of the standard bone posture in the training set.
(6) Result analysis and processing: the analysis of each bone segment and skeleton point, together with the result, is displayed on the user interface in real time. The skeleton data frames of the target user captured within the effective viewing distance of the Kinect 2.0 camera are presented on the user interface in real time; the bone segments and skeleton points corresponding to bone vectors that match the standard posture are shown in bright green, while the bone segments and skeleton points corresponding to bone vectors with incorrect posture are shown in bright red, accompanied by a voice prompt.
Claims (6)
1. A posture correction method based on depth information and skeleton information, characterized in that the concrete steps comprise:
(1) screening of valid skeleton points based on user ID and the depth values of skeleton-space coordinates: the user skeleton data and depth data are obtained with a Kinect 2.0 camera, and all skeleton points of the target user are selected; the user ID is the Kinect 2.0 sensor's unique user-tracking identifier, assigned to each user within the effective viewing distance to distinguish which user a piece of skeleton information belongs to;
(2) the skeleton data selected in step (1) are smoothed, and the coordinates of the skeleton points in the skeleton-space coordinate system are standardized;
(3) based on the coordinates in the skeleton-space coordinate system of the skeleton points standardized in step (2), bone vectors are drawn, the direction cosines of each bone vector are calculated, and these direction cosine values are extracted as features;
(4) a training data set and a test data set are obtained;
(5) the training data set and test data set obtained in step (4) serve as the input of a Bayesian-regularized neural network, and the 25 skeleton points and 24 bone segments are identified by the Bayesian-regularized neural network;
(6) result analysis and processing: the analysis of each bone segment and skeleton point, together with the result, is displayed on the user interface in real time.
2. The posture correction method based on depth information and skeleton information according to claim 1, characterized in that step (1) comprises the following steps:
A. the skeleton data and depth data of all objects within the effective viewing distance are obtained with the Kinect 2.0 camera, the skeleton data being the coordinates of the skeleton points in the skeleton-space coordinate system; the span of the effective viewing distance is 0.8-3.0 m; the skeleton points comprise: head, neck, right index finger, right thumb, right palm center, right wrist, right elbow, right shoulder, shoulder center, left shoulder, left elbow, left wrist, left palm center, left thumb, left index finger, spine, hip center, right hip, left hip, right knee, left knee, right ankle, left ankle, right foot and left foot, 25 skeleton points in all;
B. the skeleton points of the target user are determined and the skeleton points of other users are filtered out: the depth value of each skeleton point of each user is obtained with the Kinect 2.0 camera and accumulated, the average depth value over all skeleton points of each user is calculated, and these average depth values are compared; the user with the smallest average depth value is the target user, whose skeleton points are preserved while the skeleton points of the other users are filtered out.
3. The posture correction method based on depth information and skeleton information according to claim 2, characterized in that step (2) comprises the following steps:
C. the smoothing-value attribute is set, its value being a floating-point number between 0 and 1; the larger the smoothing value, the stronger the smoothing, and a smoothing value of 0 means no smoothing is performed;
the correction-value attribute is set, its value being a floating-point number between 0 and 1; the smaller the correction value, the smoother the skeleton information;
the jitter-radius attribute is set, its value being a floating-point number between 0 and 1; when the jitter of a skeleton point exceeds the configured jitter radius, it is corrected to lie within this radius;
the maximum-boundary attribute of the jitter radius is set, its value being a floating-point number between 0 and 1; any point beyond this maximum boundary of the jitter radius is not considered jitter but is recognized as a new point;
the predicted-frame-size attribute is set, its value being a floating-point number between 0 and 1, with a default of 0;
D. the Kinect skeleton data smoothing algorithm is invoked, i.e. the smoothing parameters set in step C are passed to SkeletonStream.Enable(), which smooths the skeleton data.
4. The posture correction method based on depth information and skeleton information according to claim 2, characterized in that step (3) comprises the following steps:
E. according to the principles of human anatomy, each pair of adjacent skeleton points among the 25 skeleton points extracted in step (2) is connected in turn to form a bone segment, and each bone segment is defined as a bone vector, giving 24 bone vectors in all;
F. an arbitrary bone vector a is obtained by connecting the skeleton point whose coordinates in the skeleton-space coordinate system are (x1, y1, z1) with the skeleton point whose coordinates are (x2, y2, z2); the bone vector a is represented as shown in formula (I):
a = (x2 − x1, y2 − y1, z2 − z1)   (I)
G. with α, β and γ being the angles between the bone vector a and the X, Y and Z axes of the skeleton-space coordinate system, the three direction cosines of the bone vector a are obtained as shown in formulas (II), (III) and (IV):
cos α = (x2 − x1) / |a|   (II)
cos β = (y2 − y1) / |a|   (III)
cos γ = (z2 − z1) / |a|   (IV)
where |a| = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²);
by the above algorithm, the three direction cosines of each of the 24 bone vectors are calculated in turn, and these cosine values are extracted as features.
5. The posture correction method based on depth information and skeleton information according to claim 4, characterized in that step (4) comprises the following steps:
H. obtaining the training data set: the target user completes a series of standard actions under the guidance of a professional; through steps (1) to (3), the Kinect 2.0 camera obtains the three direction cosines of each of the 24 bone vectors corresponding to each action, which constitute the training data set;
I. obtaining the test data set: the target user completes a series of actions alone within the effective viewing distance of the Kinect 2.0 camera; through steps (1) to (3), the Kinect 2.0 camera obtains the three direction cosines of each of the 24 bone vectors corresponding to each action, which constitute the test data set.
6. The posture correction method based on depth information and skeleton information according to claim 5, characterized in that step (5) comprises the following steps:
G. the number of BP neural network layers is set to 3, the number of input-layer nodes to 3 and the number of output-layer nodes to 1, and the number m of hidden-layer nodes is determined by formula (V):
m = √(n + l) + α   (V)
in formula (V), n is the number of input-layer nodes, l is the number of output-layer nodes, and α is an integer between 1 and 10;
K. the training data set is input into the Bayesian-regularized neural network as the training set, and the network is trained on each bone vector; in the posture correction stage, the test data set is input into the trained Bayesian-regularized neural network, which judges the correctness of each bone vector, i.e. whether it matches the corresponding bone vector of the standard bone posture in the training set.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611251820.3A CN106650687B (en) | 2016-12-30 | 2016-12-30 | Posture correction method based on depth information and skeleton information |
PCT/CN2017/104990 WO2018120964A1 (en) | 2016-12-30 | 2017-09-30 | Posture correction method based on depth information and skeleton information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611251820.3A CN106650687B (en) | 2016-12-30 | 2016-12-30 | Posture correction method based on depth information and skeleton information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106650687A true CN106650687A (en) | 2017-05-10 |
CN106650687B CN106650687B (en) | 2020-05-19 |
Family
ID=58836708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611251820.3A Active CN106650687B (en) | 2016-12-30 | 2016-12-30 | Posture correction method based on depth information and skeleton information |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106650687B (en) |
WO (1) | WO2018120964A1 (en) |
Cited By (26)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106981075A (en) * | 2017-05-31 | 2017-07-25 | 江西制造职业技术学院 | The skeleton point parameter acquisition devices of apery motion mimicry and its recognition methods |
CN107220608A (en) * | 2017-05-22 | 2017-09-29 | 华南理工大学 | What a kind of basketball action model was rebuild and defended instructs system and method |
CN107308638A (en) * | 2017-06-06 | 2017-11-03 | 中国地质大学(武汉) | A kind of entertaining rehabilitation training of upper limbs system and method for virtual reality interaction |
CN107481280A (en) * | 2017-08-16 | 2017-12-15 | 北京优时尚科技有限责任公司 | The antidote and computing device of a kind of skeleton point |
CN107520843A (en) * | 2017-08-22 | 2017-12-29 | 南京野兽达达网络科技有限公司 | The action training method of one species people's multi-freedom robot |
WO2018120964A1 (en) * | 2016-12-30 | 2018-07-05 | 山东大学 | Posture correction method based on depth information and skeleton information |
CN108536292A (en) * | 2018-03-29 | 2018-09-14 | 深圳市芯汉感知技术有限公司 | A kind of data filtering methods and bone point coordinates accurate positioning method |
CN108720841A (en) * | 2018-05-22 | 2018-11-02 | 上海交通大学 | Wearable lower extremity movement correction system based on cloud detection |
CN108919943A (en) * | 2018-05-22 | 2018-11-30 | 南京邮电大学 | A kind of real-time hand method for tracing based on depth transducer |
CN109284696A (en) * | 2018-09-03 | 2019-01-29 | 吴佳雨 | A kind of image makings method for improving based on intelligent data acquisition Yu cloud service technology |
CN109589563A (en) * | 2018-12-29 | 2019-04-09 | 南京华捷艾米软件科技有限公司 | A kind of auxiliary method and system of dancing posture religion based on 3D body-sensing camera |
CN109758745A (en) * | 2018-09-30 | 2019-05-17 | 何家淳 | Artificial intelligence basketball training system based on Python/Java |
CN109948579A (en) * | 2019-03-28 | 2019-06-28 | 广州凡拓数字创意科技股份有限公司 | A kind of human body limb language identification method and system |
CN110032958A (en) * | 2019-03-28 | 2019-07-19 | 广州凡拓数字创意科技股份有限公司 | A kind of human body limb language identification method and system |
CN110263720A (en) * | 2019-06-21 | 2019-09-20 | 中国民航大学 | Action identification method based on depth image and bone information |
CN110472481A (en) * | 2019-07-01 | 2019-11-19 | 华南师范大学 | A kind of sleeping position detection method, device and equipment |
CN110490168A (en) * | 2019-08-26 | 2019-11-22 | 杭州视在科技有限公司 | Meet machine human behavior monitoring method in airport based on target detection and skeleton point algorithm |
CN110584911A (en) * | 2019-09-20 | 2019-12-20 | 长春理工大学 | Intelligent nursing bed based on prone position recognition |
CN110751100A (en) * | 2019-10-22 | 2020-02-04 | 北京理工大学 | Auxiliary training method and system for stadium |
CN110969114A (en) * | 2019-11-28 | 2020-04-07 | 四川省骨科医院 | Human body action function detection system, detection method and detector |
CN110991292A (en) * | 2019-11-26 | 2020-04-10 | 爱菲力斯(深圳)科技有限公司 | Action identification comparison method and system, computer storage medium and electronic device |
CN111353345A (en) * | 2018-12-21 | 2020-06-30 | 上海形趣信息科技有限公司 | Method, device and system for providing training feedback, electronic equipment and storage medium |
CN111353347A (en) * | 2018-12-21 | 2020-06-30 | 上海形趣信息科技有限公司 | Motion recognition error correction method, electronic device, and storage medium |
CN111382596A (en) * | 2018-12-27 | 2020-07-07 | 鸿富锦精密工业(武汉)有限公司 | Face recognition method and device and computer storage medium |
CN111539337A (en) * | 2020-04-26 | 2020-08-14 | 上海眼控科技股份有限公司 | Vehicle posture correction method, device and equipment |
CN111639612A (en) * | 2020-06-04 | 2020-09-08 | 浙江商汤科技开发有限公司 | Posture correction method and device, electronic equipment and storage medium |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110991161B (en) * | 2018-09-30 | 2023-04-18 | 北京国双科技有限公司 | Similar text determination method, neural network model obtaining method and related device |
CN109815907B (en) * | 2019-01-25 | 2023-04-07 | 深圳市象形字科技股份有限公司 | Sit-up posture detection and guidance method based on computer vision technology |
CN110083239B (en) * | 2019-04-19 | 2022-02-22 | 南京邮电大学 | Bone shake detection method based on dynamic weighting and grey prediction |
CN110334609B (en) * | 2019-06-14 | 2023-09-26 | 斯坦福启天联合(广州)研究院有限公司 | Intelligent real-time somatosensory capturing method |
CN110796699B (en) * | 2019-06-18 | 2024-03-01 | 叠境数字科技(上海)有限公司 | Optimal view angle selection method and three-dimensional human skeleton detection method for multi-view camera system |
CN110507986B (en) * | 2019-08-30 | 2023-08-22 | 网易(杭州)网络有限公司 | Animation information processing method and device |
CN110728220A (en) * | 2019-09-30 | 2020-01-24 | 上海大学 | Gymnastics auxiliary training method based on human body action skeleton information |
CN111046749B (en) * | 2019-11-25 | 2023-05-23 | 西安建筑科技大学 | Human body falling behavior detection method based on depth data |
CN112950751A (en) * | 2019-12-11 | 2021-06-11 | 阿里巴巴集团控股有限公司 | Gesture action display method and device, storage medium and system |
CN111402290B (en) * | 2020-02-29 | 2023-09-12 | 华为技术有限公司 | Action restoration method and device based on skeleton key points |
CN111652076A (en) * | 2020-05-11 | 2020-09-11 | 重庆大学 | Automatic gesture recognition system for AD (analog-digital) scale comprehension capability test |
CN111617464B (en) * | 2020-05-28 | 2023-02-24 | 西安工业大学 | Treadmill body-building method with action recognition function |
CN111680613B (en) * | 2020-06-03 | 2023-04-14 | 安徽大学 | Method for detecting falling behavior of escalator passengers in real time |
CN111860274B (en) * | 2020-07-14 | 2023-04-07 | 清华大学 | Traffic police command gesture recognition method based on head orientation and upper half skeleton characteristics |
CN111950392B (en) * | 2020-07-23 | 2022-08-05 | 华中科技大学 | Human body sitting posture identification method based on depth camera Kinect |
CN112149962B (en) * | 2020-08-28 | 2023-08-22 | 中国地质大学(武汉) | Risk quantitative assessment method and system for construction accident cause behaviors |
CN112149531B (en) * | 2020-09-09 | 2022-07-08 | 武汉科技大学 | Human skeleton data modeling method in behavior recognition |
CN112446433A (en) * | 2020-11-30 | 2021-03-05 | 北京数码视讯技术有限公司 | Method and device for determining accuracy of training posture and electronic equipment |
CN112494034B (en) * | 2020-11-30 | 2023-01-17 | 重庆优乃特医疗器械有限责任公司 | Data processing and analyzing system and method based on 3D posture detection and analysis |
CN112434639A (en) * | 2020-12-03 | 2021-03-02 | 郑州捷安高科股份有限公司 | Action matching method, device, equipment and storage medium |
CN112641441B (en) * | 2020-12-18 | 2024-01-02 | 河南翔宇医疗设备股份有限公司 | Posture evaluation method, system, device and computer readable storage medium |
CN112749671A (en) * | 2021-01-19 | 2021-05-04 | 澜途集思生态科技集团有限公司 | Human behavior recognition method based on video |
CN112966370B (en) * | 2021-02-09 | 2022-04-19 | 武汉纺织大学 | Design method of human body lower limb muscle training system based on Kinect |
CN112906604B (en) * | 2021-03-03 | 2024-02-20 | 安徽省科亿信息科技有限公司 | Behavior recognition method, device and system based on skeleton and RGB frame fusion |
CN113486757B (en) * | 2021-06-29 | 2022-04-05 | 北京科技大学 | Multi-person linear running test timing method based on human skeleton key point detection |
CN116030137A (en) * | 2021-10-27 | 2023-04-28 | 华为技术有限公司 | Parameter determination method and related equipment |
CN114360060B (en) * | 2021-12-31 | 2024-04-09 | 北京航空航天大学杭州创新研究院 | Human body action recognition and counting method |
CN115497596B (en) * | 2022-11-18 | 2023-04-07 | 深圳聚邦云天科技有限公司 | Human body motion process posture correction method and system based on Internet of things |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102824176A (en) * | 2012-09-24 | 2012-12-19 | 南通大学 | Upper limb joint movement degree measuring method based on Kinect sensor |
CN103230664A (en) * | 2013-04-17 | 2013-08-07 | 南通大学 | Upper limb movement rehabilitation training system and method based on Kinect sensor |
CN103246891A (en) * | 2013-05-28 | 2013-08-14 | 重庆邮电大学 | Chinese sign language recognition method based on kinect |
CN103473562A (en) * | 2013-09-18 | 2013-12-25 | 柳州市博源环科科技有限公司 | Automatic training and identifying system for specific human body action |
CN103489000A (en) * | 2013-09-18 | 2014-01-01 | 柳州市博源环科科技有限公司 | Achieving method of human movement recognition training system |
CN104200491A (en) * | 2014-08-15 | 2014-12-10 | 浙江省新华医院 | Motion posture correcting system for human body |
CN104517097A (en) * | 2014-09-24 | 2015-04-15 | 浙江大学 | Kinect-based moving human body posture recognition method |
CN104524742A (en) * | 2015-01-05 | 2015-04-22 | 河海大学常州校区 | Cerebral palsy child rehabilitation training method based on Kinect sensor |
CN104722056A (en) * | 2015-02-05 | 2015-06-24 | 北京市计算中心 | Rehabilitation training system and method using virtual reality technology |
CN105005769A (en) * | 2015-07-08 | 2015-10-28 | 山东大学 | Deep information based sign language recognition method |
CN105807926A (en) * | 2016-03-08 | 2016-07-27 | 中山大学 | Unmanned aerial vehicle man-machine interaction method based on three-dimensional continuous gesture recognition |
CN106022213A (en) * | 2016-05-04 | 2016-10-12 | 北方工业大学 | Human body motion recognition method based on three-dimensional bone information |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6788809B1 (en) * | 2000-06-30 | 2004-09-07 | Intel Corporation | System and method for gesture recognition in three dimensions using stereo imaging and color vision |
CN104484574A (en) * | 2014-12-25 | 2015-04-01 | 东华大学 | Real-time human body gesture supervised training correction system based on quaternion |
CN105307017A (en) * | 2015-11-03 | 2016-02-03 | Tcl集团股份有限公司 | Method and device for correcting posture of smart television user |
CN106650687B (en) * | 2016-12-30 | 2020-05-19 | 山东大学 | Posture correction method based on depth information and skeleton information |
- 2016-12-30: CN application CN201611251820.3A filed; granted as CN106650687B (status: Active)
- 2017-09-30: WO application PCT/CN2017/104990 filed as WO2018120964A1 (status: Application Filing)
Patent Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102824176A (en) * | 2012-09-24 | 2012-12-19 | 南通大学 | Upper limb joint movement degree measuring method based on Kinect sensor |
CN103230664A (en) * | 2013-04-17 | 2013-08-07 | 南通大学 | Upper limb movement rehabilitation training system and method based on Kinect sensor |
CN103246891A (en) * | 2013-05-28 | 2013-08-14 | 重庆邮电大学 | Chinese sign language recognition method based on kinect |
CN103473562A (en) * | 2013-09-18 | 2013-12-25 | 柳州市博源环科科技有限公司 | Automatic training and identifying system for specific human body action |
CN103489000A (en) * | 2013-09-18 | 2014-01-01 | 柳州市博源环科科技有限公司 | Achieving method of human movement recognition training system |
CN104200491A (en) * | 2014-08-15 | 2014-12-10 | 浙江省新华医院 | Motion posture correcting system for human body |
CN104517097A (en) * | 2014-09-24 | 2015-04-15 | 浙江大学 | Kinect-based moving human body posture recognition method |
CN104524742A (en) * | 2015-01-05 | 2015-04-22 | 河海大学常州校区 | Cerebral palsy child rehabilitation training method based on Kinect sensor |
CN104722056A (en) * | 2015-02-05 | 2015-06-24 | 北京市计算中心 | Rehabilitation training system and method using virtual reality technology |
CN105005769A (en) * | 2015-07-08 | 2015-10-28 | 山东大学 | Deep information based sign language recognition method |
CN105807926A (en) * | 2016-03-08 | 2016-07-27 | 中山大学 | Unmanned aerial vehicle man-machine interaction method based on three-dimensional continuous gesture recognition |
CN106022213A (en) * | 2016-05-04 | 2016-10-12 | 北方工业大学 | Human body motion recognition method based on three-dimensional bone information |
Non-Patent Citations (1)
Title |
---|
朱国刚 (ZHU Guogang) et al.: "Human Action Recognition Based on Kinect Sensor Skeleton Information" (基于Kinect传感器骨骼信息的人体动作识别), Computer Simulation (计算机仿真) *
Cited By (32)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018120964A1 (en) * | 2016-12-30 | 2018-07-05 | 山东大学 | Posture correction method based on depth information and skeleton information |
CN107220608A (en) * | 2017-05-22 | 2017-09-29 | 华南理工大学 | What a kind of basketball action model was rebuild and defended instructs system and method |
CN106981075A (en) * | 2017-05-31 | 2017-07-25 | 江西制造职业技术学院 | The skeleton point parameter acquisition devices of apery motion mimicry and its recognition methods |
CN107308638B (en) * | 2017-06-06 | 2019-09-17 | 中国地质大学(武汉) | A kind of entertaining rehabilitation training of upper limbs system and method for virtual reality interaction |
CN107308638A (en) * | 2017-06-06 | 2017-11-03 | 中国地质大学(武汉) | A kind of entertaining rehabilitation training of upper limbs system and method for virtual reality interaction |
CN107481280A (en) * | 2017-08-16 | 2017-12-15 | 北京优时尚科技有限责任公司 | The antidote and computing device of a kind of skeleton point |
CN107520843A (en) * | 2017-08-22 | 2017-12-29 | 南京野兽达达网络科技有限公司 | The action training method of one species people's multi-freedom robot |
CN108536292A (en) * | 2018-03-29 | 2018-09-14 | 深圳市芯汉感知技术有限公司 | A kind of data filtering methods and bone point coordinates accurate positioning method |
CN108720841A (en) * | 2018-05-22 | 2018-11-02 | 上海交通大学 | Wearable lower extremity movement correction system based on cloud detection |
CN108919943A (en) * | 2018-05-22 | 2018-11-30 | 南京邮电大学 | A kind of real-time hand method for tracing based on depth transducer |
CN109284696A (en) * | 2018-09-03 | 2019-01-29 | 吴佳雨 | A kind of image makings method for improving based on intelligent data acquisition Yu cloud service technology |
CN109758745A (en) * | 2018-09-30 | 2019-05-17 | 何家淳 | Artificial intelligence basketball training system based on Python/Java |
CN111353347A (en) * | 2018-12-21 | 2020-06-30 | 上海形趣信息科技有限公司 | Motion recognition error correction method, electronic device, and storage medium |
CN111353345B (en) * | 2018-12-21 | 2024-04-16 | 上海史贝斯健身管理有限公司 | Method, apparatus, system, electronic device, and storage medium for providing training feedback |
CN111353345A (en) * | 2018-12-21 | 2020-06-30 | 上海形趣信息科技有限公司 | Method, device and system for providing training feedback, electronic equipment and storage medium |
CN111382596A (en) * | 2018-12-27 | 2020-07-07 | 鸿富锦精密工业(武汉)有限公司 | Face recognition method and device and computer storage medium |
CN109589563A (en) * | 2018-12-29 | 2019-04-09 | 南京华捷艾米软件科技有限公司 | A kind of auxiliary method and system of dancing posture religion based on 3D body-sensing camera |
CN109948579A (en) * | 2019-03-28 | 2019-06-28 | 广州凡拓数字创意科技股份有限公司 | A kind of human body limb language identification method and system |
CN110032958A (en) * | 2019-03-28 | 2019-07-19 | 广州凡拓数字创意科技股份有限公司 | A kind of human body limb language identification method and system |
CN110032958B (en) * | 2019-03-28 | 2020-01-24 | 广州凡拓数字创意科技股份有限公司 | Human body limb language identification method and system |
CN110263720A (en) * | 2019-06-21 | 2019-09-20 | 中国民航大学 | Action identification method based on depth image and bone information |
CN110263720B (en) * | 2019-06-21 | 2022-12-27 | 中国民航大学 | Action recognition method based on depth image and skeleton information |
CN110472481A (en) * | 2019-07-01 | 2019-11-19 | 华南师范大学 | A kind of sleeping position detection method, device and equipment |
CN110472481B (en) * | 2019-07-01 | 2024-01-05 | 华南师范大学 | Sleeping gesture detection method, device and equipment |
CN110490168A (en) * | 2019-08-26 | 2019-11-22 | 杭州视在科技有限公司 | Meet machine human behavior monitoring method in airport based on target detection and skeleton point algorithm |
CN110584911A (en) * | 2019-09-20 | 2019-12-20 | 长春理工大学 | Intelligent nursing bed based on prone position recognition |
CN110751100A (en) * | 2019-10-22 | 2020-02-04 | 北京理工大学 | Auxiliary training method and system for stadium |
CN110991292A (en) * | 2019-11-26 | 2020-04-10 | 爱菲力斯(深圳)科技有限公司 | Action identification comparison method and system, computer storage medium and electronic device |
CN110969114A (en) * | 2019-11-28 | 2020-04-07 | 四川省骨科医院 | Human body action function detection system, detection method and detector |
CN110969114B (en) * | 2019-11-28 | 2023-06-09 | 四川省骨科医院 | Human body action function detection system, detection method and detector |
CN111539337A (en) * | 2020-04-26 | 2020-08-14 | 上海眼控科技股份有限公司 | Vehicle posture correction method, device and equipment |
CN111639612A (en) * | 2020-06-04 | 2020-09-08 | 浙江商汤科技开发有限公司 | Posture correction method and device, electronic equipment and storage medium |
Also Published As
Publication number | Publication date |
---|---|
CN106650687B (en) | 2020-05-19 |
WO2018120964A1 (en) | 2018-07-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106650687A (en) | Posture correction method based on depth information and skeleton information | |
CN102184541B (en) | Multi-objective optimized human body motion tracking method | |
CN107808143A (en) | Dynamic gesture identification method based on computer vision | |
CN109597485B (en) | Gesture interaction system based on double-fingered-area features and working method thereof | |
CN104035557B (en) | Kinect action identification method based on joint activeness | |
CN107243141A (en) | A kind of action auxiliary training system based on motion identification | |
CN107754225A (en) | A kind of intelligent body-building coaching system | |
CN107423730A (en) | A kind of body gait behavior active detecting identifying system and method folded based on semanteme | |
CN110490109A (en) | A kind of online human body recovery action identification method based on monocular vision | |
CN114998983A (en) | Limb rehabilitation method based on augmented reality technology and posture recognition technology | |
CN110991268A (en) | Depth image-based Parkinson hand motion quantization analysis method and system | |
CN111383735A (en) | Unmanned body-building analysis method based on artificial intelligence | |
CN114550027A (en) | Vision-based motion video fine analysis method and device | |
CN113705540A (en) | Method and system for recognizing and counting non-instrument training actions | |
CN114612511A (en) | Exercise training assistant decision support system based on improved domain confrontation neural network algorithm | |
Clouthier et al. | Development and validation of a deep learning algorithm and open-source platform for the automatic labelling of motion capture markers | |
Kanase et al. | Pose estimation and correcting exercise posture | |
CN114550299A (en) | System and method for evaluating daily life activity ability of old people based on video | |
Pang et al. | Dance video motion recognition based on computer vision and image processing | |
CN112183315A (en) | Motion recognition model training method and motion recognition method and device | |
Li et al. | Intelligent correction method of shooting action based on computer vision | |
CN115530814A (en) | Child motion rehabilitation training method based on visual posture detection and computer deep learning | |
CN115006822A (en) | Intelligent fitness mirror control system | |
Nguyen et al. | Vision-Based Global Localization of Points of Gaze in Sport Climbing | |
Murthy et al. | DiveNet: Dive Action Localization and Physical Pose Parameter Extraction for High Performance Training |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
GR01 | Patent grant | |