CN106650687B - Posture correction method based on depth information and skeleton information - Google Patents
- Publication number
- CN106650687B (application CN201611251820.3A / CN201611251820A)
- Authority
- CN
- China
- Prior art keywords
- bone
- skeleton
- points
- user
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
Abstract
The invention relates to a posture correction method based on depth information and skeleton information, comprising the following steps: (1) screening valid bone points based on the user ID and the depth values of the bone-space coordinates; (2) smoothing the bone information and standardizing the coordinates of the bone points in the bone-space coordinate system; (3) drawing bone vectors and calculating the direction cosines of each bone vector; (4) acquiring a training data set and a test data set; (5) feeding the data sets into a Bayesian-regularized BP neural network, which identifies 25 bone points and 24 bone segments; (6) analyzing and processing the results. The invention overcomes the drawback that a color camera is easily disturbed by external factors such as lighting, accurately tracks the human body within the field of view, and improves the naturalness of human-computer interaction.
Description
Technical Field
The invention relates to a posture correction method based on depth information and skeleton information, and belongs to the technical field of intelligent sensing and intelligent computing.
Background
In daily life, posture correction is applied in many fields. In medicine, it can be used for rehabilitation of movement disorders such as Parkinson's disease and muscle spasm; in education, it can assist the teaching of basketball, rhythmic gymnastics, dancing and similar activities; in fitness and entertainment, it can correct posture in yoga, Pilates and other exercises, helping trainees achieve the expected results. It is reported that one in three Asians over 40 years old may suffer from locomotive syndrome, for which reasonable exercise therapy is an indispensable means of rehabilitation. For gymnastics, yoga, Pilates and similar activities, the quality of the finished posture affects the effect of later training, and an incorrect posture may cause bone dislocation or muscle strain. Therefore, in order to achieve a better rehabilitation effect and reduce unnecessary injuries during training, it is important to study an intelligent and efficient posture monitoring and correction method.
In medicine, traditional posture correction requires the accompaniment and guidance of professionals, consumes considerable manpower and material resources, makes participants feel bored, and often fails to achieve the expected therapeutic effect. Some research institutions have applied the Sony motion-sensing device EyeToy and the Nintendo Wii to limb rehabilitation training, but the limitations of two-dimensional image processing restrict the development of these technologies in the rehabilitation field.
Microsoft's Kinect somatosensory device provides real-time 3D motion capture, microphone input, audio-video recognition and other functions. In particular, the second-generation Kinect body sensor offers optimized skeleton tracking: a depth camera with enhanced fidelity, combined with improved software, improves a series of skeleton-tracking functions. It can now track 6 complete skeletons (the first-generation Kinect tracked at most 2) and 25 bone points per target (the first generation tracked 20); automatic tracking and positioning are more accurate and stable than in the previous generation, and the tracking range is larger.
Disclosure of Invention
Aiming at the defects of the existing method, the invention provides a posture correction method based on depth information and skeleton information.
The invention exploits the Kinect camera's ability to acquire the user's depth data and skeleton data, and aims to provide a posture correction method that is strongly interference-resistant, convenient, practical and effective. The method comprises the following steps: 1) data acquisition: Kinect 2.0 acquires depth data and bone data within the sensor's field of view (25 bone points on up to 6 persons), and bone points are screened based on the user ID (a bone-tracking ID that Kinect assigns to each user in the field of view to distinguish whose bone data this is) and the depth values of the bone-space coordinates; 2) data preprocessing: the bone data are smoothed to standardize the bone-point coordinates, and bone vectors are drawn according to the structure of the human body; 3) feature extraction and recognition: the direction cosines of the bone vectors are calculated, the 3 direction cosines of each bone vector are extracted as features and input into a Bayesian-regularized BP neural network for recognition; finally, the recognition result is analyzed and displayed on the user interface, with standard bone segments and points shown in bright green and non-standard ones shown in bright red, accompanied by a voice prompt.
The invention improves the practicability, accuracy and robustness of posture correction.
The technical scheme of the invention is as follows:
interpretation of terms:
1. The BP (Back Propagation) neural network, proposed in 1986 by a group of scientists led by Rumelhart and McClelland, is a multi-layer feedforward network trained by the error back-propagation algorithm and is one of the most widely used neural network models. A BP neural network can learn and store a large number of input-output pattern mappings without requiring the mathematical equations describing those mappings to be given in advance. Its learning rule is steepest descent: the weights and thresholds of the network are continually adjusted through back-propagation to minimize the network's sum of squared errors. The topology of a BP neural network comprises an input layer, one or more hidden layers and an output layer.
2. The depth data stream refers to a series of depth data acquired by kinect in real time;
3. the color data stream refers to a series of color data acquired by kinect in real time;
a posture correction method based on depth information and skeleton information comprises the following specific steps:
(1) Valid bone point screening based on the user ID and the depth values of the bone-space coordinates: a Kinect 2.0 camera acquires the user's bone data and depth data, and all bone points of the target user are selected. The user ID is the unique user-tracking identifier of the Kinect 2.0 sensing device, assigned to each user within the effective visual range to distinguish which user the skeleton information belongs to;
(2) Smoothing the bone data selected in step (1) and standardizing the coordinates of the bone points in the bone-space coordinate system, which reduces the frame-to-frame variation of the bone-point positions. The data object type of the Kinect development tool Kinect for Windows SDK is provided in the form of skeleton frames, each frame comprising 25 bone points.
(3) Drawing bone vectors from the standardized bone-point coordinates of step (2), calculating the direction cosines of each bone vector, and extracting the direction cosines as features;
(4) Acquiring a training data set and a test data set; the data sets consist of the direction cosines of the user's bone vectors: each skeleton frame contains 24 bone vectors, and each bone vector has three direction cosines;
(5) the training data set and the testing data set obtained in the step (4) are used as the input of a Bayesian regularization BP neural network, and 25 bone points and 24 bone segments are identified through the Bayesian regularization BP neural network;
(6) Analyzing and processing the results, and displaying the analysis of each bone segment and bone point on the user interface in real time. The skeleton frames of the target user captured by the Kinect 2.0 camera within the effective visual range are presented on the user interface in real time; bone segments and points whose bone vectors match the standard posture are shown in bright green, while those with incorrect posture are shown in bright red, accompanied by a voice prompt.
Preferably, the step (1) comprises the following steps:
A. A Kinect 2.0 camera acquires the skeleton data and depth data of all objects within the effective visual range, where the skeleton data are the coordinates of the bone points in the bone-space coordinate system. The bone-space coordinate system takes the Kinect 2.0 camera as its origin, with the Z axis aligned with the orientation of the camera, the positive Y half-axis extending upwards, and the positive X half-axis extending to the sensor's left (from the sensor's point of view). The effective visual range is the range within which the Kinect 2.0 camera can correctly collect information, namely 0.8-3.0 m. The 25 bone points are: head, neck, right index finger, right thumb, right palm, right wrist, right elbow, right shoulder, shoulder center, left elbow, left wrist, left palm, left thumb, left index finger, spine, hip center, right hip, left hip, right knee, left knee, right ankle, left ankle, right foot and left foot;
B. Determining the bone points of the target user and filtering out those of other users: the Kinect 2.0 camera acquires the depth value of each user's bone points, accumulates them, and computes the average depth value of all bone points of each user; the averages are compared, the user with the smallest average depth value is taken as the target user, whose bone points are stored, while the bone points of the other users are filtered out. When several users remain in the effective visual range, many invalid bone points appear and degrade the accuracy of the later feature-value extraction, so the bone points of the target body must be determined and the other, invalid, bone points filtered out.
Kinect 2.0 can simultaneously track 25 bone points on 6 target users, together with the depth data stream and color data stream of objects in the field of view. The skeleton API in Kinect for Windows SDK provides the position information of the persons in front of the Kinect, including detailed posture, three-dimensional bone-point coordinates and user ID information. The data object type is provided in the form of skeleton frames, each of which stores 25 bone points.
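Step B above can be sketched in Python. The data layout (a dict mapping user ID to a 25×3 array of bone-space coordinates, with Z being the depth in metres) is a hypothetical stand-in for the SDK's skeleton frames, not the Kinect API itself:

```python
import numpy as np

def select_target_user(skeletons):
    """Pick the target user: the tracked body whose bone points have the
    smallest average depth, i.e. the person closest to the camera.

    `skeletons` maps a user/tracking ID to an array of shape (25, 3) holding
    (x, y, z) bone-space coordinates; z is the depth value in metres.
    """
    avg_depth = {uid: float(np.mean(pts[:, 2])) for uid, pts in skeletons.items()}
    target_id = min(avg_depth, key=avg_depth.get)  # smallest average depth wins
    return target_id, skeletons[target_id]

# Two users: user 7 stands ~1.2 m from the camera, user 9 ~2.5 m.
skeletons = {
    7: np.random.uniform([-0.5, -1.0, 1.1], [0.5, 1.0, 1.3], size=(25, 3)),
    9: np.random.uniform([-0.5, -1.0, 2.4], [0.5, 1.0, 2.6], size=(25, 3)),
}
target_id, target_pts = select_target_user(skeletons)
print(target_id)  # → 7
```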
According to the invention, said step (2) comprises the steps of:
C. Set the Smoothing attribute: a floating-point value between 0 and 1; the larger the value, the stronger the smoothing, and a value of 0 means no smoothing is performed.
Set the Correction attribute: a floating-point value between 0 and 1; the smaller the value, the smoother the skeleton information.
Set the JitterRadius attribute: a floating-point value between 0 and 1; when a bone point jitters beyond the set jitter radius, it is corrected back to within the radius.
Set the MaxDeviationRadius attribute, the maximum boundary of the jitter radius: a floating-point value between 0 and 1; any point beyond this boundary is not regarded as jitter but is treated as a new point.
Set the Prediction attribute (the number of predicted frames): a floating-point value between 0 and 1, with a default of 0.
D. Call the Kinect bone-data smoothing algorithm, i.e. pass the smoothing parameters of step C to SkeletonStream.Enable().
Smoothing the skeletal data incurs a performance overhead: the stronger the smoothing, the greater the cost. There is no fixed recipe for setting the smoothing parameters; continuous testing and debugging are required to achieve the best performance and effect, and different stages of program execution may require different parameter settings.
During bone-point tracking, the bone motion may change abruptly, for example when the target user's movements are not sufficiently fluent or the Kinect hardware performs poorly. The relative positions of the bone points may then vary widely from frame to frame, which can affect the application negatively, e.g. degrading the user experience or causing unintended control behaviour. Smoothing the skeleton data and standardizing the bone-point coordinates reduces the frame-to-frame differences in the bone-point positions.
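The effect of the Smoothing and Correction attributes can be illustrated with a one-dimensional sketch of double exponential (Holt) smoothing, the family of filter on which the Kinect SDK's skeleton smoothing is based; this is an illustrative approximation, not the SDK's exact filter:

```python
def smooth_joint(raw, smoothing=0.5, correction=0.5):
    """Apply Holt-style double exponential smoothing to one coordinate track.
    `smoothing` and `correction` play the roles of the Smoothing and
    Correction attributes described above.
    """
    filtered, trend = raw[0], 0.0
    out = [filtered]
    for x in raw[1:]:
        prev = filtered
        # Blend the raw sample with the previous filtered value plus trend.
        filtered = (1 - smoothing) * x + smoothing * (filtered + trend)
        # Update the trend estimate; a smaller correction tracks changes more slowly.
        trend = correction * (filtered - prev) + (1 - correction) * trend
        out.append(filtered)
    return out

noisy = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]   # jittery bone-point coordinate
smoothed = smooth_joint(noisy)
# The smoothed track varies less than the raw one.
```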
Preferably, the step (3) comprises the following steps:
E. According to the structure of the human body, for the 25 bone points extracted in step (2), each pair of adjacent bone points is connected in turn to form a bone segment, which is defined as a bone vector; 24 bone vectors are obtained in total;
F. Let any bone vector be formed by connecting the bone point with coordinates (x1, y1, z1) in the bone-space coordinate system to the bone point with coordinates (x2, y2, z2). The bone vector v is then given by formula (I):

v = (x2 - x1, y2 - y1, z2 - z1)    (I)
G. Let the angles between the bone vector v and the X, Y and Z axes of the bone-space coordinate system be α, β and γ. The direction cosines of v are then given by formulas (II), (III) and (IV):

cos α = (x2 - x1) / |v|    (II)
cos β = (y2 - y1) / |v|    (III)
cos γ = (z2 - z1) / |v|    (IV)

where |v| = sqrt((x2 - x1)² + (y2 - y1)² + (z2 - z1)²).
Through the algorithm above, the three direction cosines of the 24 bone vectors are calculated in turn and extracted as features.
When the user makes different gestures, each bone segment of the body has different position and angle information, so a given kind of motion can be characterized by the direction cosines of the 24 defined bone vectors.
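Formulas (I)-(IV) reduce to normalizing the bone vector; a minimal sketch:

```python
import numpy as np

def direction_cosines(p1, p2):
    """Direction cosines (cos α, cos β, cos γ) of the bone vector from
    bone point p1 to bone point p2, per formulas (I)-(IV)."""
    v = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    return v / np.linalg.norm(v)

# A bone vector along the X axis: cos α = 1, cos β = cos γ = 0.
c = direction_cosines((0, 0, 0), (2, 0, 0))
print(c)  # → [1. 0. 0.]
```

For any bone vector the three direction cosines satisfy cos²α + cos²β + cos²γ = 1, which is a convenient sanity check on the extracted features.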
Preferably, the step (4) comprises the following steps:
H. acquiring a training data set: a target user completes a series of standard actions under the guidance of a professional, and through the steps (1) to (3), the Kinect2.0 camera acquires three directional cosine values of 24 skeleton vectors corresponding to each action, namely a training data set;
I. acquiring a test data set: the target user independently completes a series of actions in the Kinect2.0 camera effective visual range, and through the steps (1) to (3), the Kinect2.0 camera obtains three directional cosine values of 24 skeleton vectors corresponding to each action, namely a test data set.
In the early stage of posture correction, the user completes a series of standard actions under the guidance of a professional. The Kinect 2.0 camera records these actions as skeleton frames, which then undergo bone-point screening, bone-data smoothing and extraction of the direction-cosine feature values of the bone vectors, finally producing the required training data set. The training set comprises a number of data frames (the more frames, the higher the precision); each frame of skeleton data contains the 24 bone vectors of one posture, and each bone vector has three direction cosines. In the posture-correction stage, the user independently completes a series of actions within the Kinect's field of view; the Kinect records these possibly non-standard actions as skeleton frames, the same bone-point screening, smoothing and direction-cosine extraction are applied, and the required test data set is produced.
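The shape of the resulting data sets can be sketched as follows; the frame count and the flattening into one sample per bone vector are illustrative choices, not mandated by the method:

```python
import numpy as np

# F skeleton frames, each with 24 bone vectors of 3 direction cosines.
F = 10                                   # number of recorded frames (illustrative)
frames = np.random.uniform(-1, 1, size=(F, 24, 3))
frames /= np.linalg.norm(frames, axis=2, keepdims=True)  # rows become direction cosines

# For a network that judges one bone vector at a time, flatten to one
# 3-dimensional sample per bone vector.
samples = frames.reshape(-1, 3)
print(samples.shape)  # → (240, 3)
```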
Preferably, the step (5) comprises the following steps:
J. The BP neural network has 3 layers, with 3 input-layer nodes and 1 output-layer node; the number m of hidden-layer nodes is determined by formula (V):

m = sqrt(n + l) + α    (V)

where n is the number of input-layer nodes, l is the number of output-layer nodes, and α is an integer between 1 and 10;
Study of BP neural networks shows that simple action-recognition problems can be solved with a single hidden layer, so a three-layer BP network is chosen. The numbers of input- and output-layer nodes are generally determined by the dimensions of the input and output variables of the practical problem. Here the input variable is a three-dimensional direction-cosine feature value, so the input layer has 3 nodes; the output variable indicates, as 1 or 0, whether each bone vector matches the bone vector of the standard posture, so the output layer has 1 node. The number of hidden-layer nodes strongly affects network performance, and different node counts give very different results: with too few nodes the network iterates quickly but models the problem insufficiently, giving poor performance; with too many nodes the network structure becomes complex, the computation increases and training takes longer. A suitable node count must therefore be chosen.
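Formula (V) can be evaluated directly; rounding the square root to an integer is an assumption, since the text leaves the handling of non-integer roots unspecified:

```python
import math

def hidden_nodes(n, l, alpha):
    """Hidden-layer node count per formula (V): m = sqrt(n + l) + alpha."""
    return round(math.sqrt(n + l)) + alpha

# n = 3 inputs (three direction cosines), l = 1 output, alpha chosen from 1..10.
print(hidden_nodes(3, 1, 4))  # → 6
```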
K. The training data set is input into the Bayesian-regularized BP neural network, which is trained on each bone vector. In the posture-correction stage, the test data set is input into the trained network, which judges the correctness of each bone vector, i.e. whether it matches the corresponding bone vector of the standard posture in the training set.
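A minimal sketch of the training/recognition loop: a 3-6-1 sigmoid network trained by back-propagation on synthetic direction-cosine samples. A plain L2 weight-decay term stands in for true Bayesian regularization (e.g. MATLAB's trainbr), and the data and labels are invented for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: each sample is the three direction cosines of a bone vector.
# Label 1 if it points into the half-space x > 0; this stands in for the
# patent's per-bone-vector "matches the standard posture" decision.
v = rng.normal(size=(400, 3))
X = v / np.linalg.norm(v, axis=1, keepdims=True)
y = (X[:, 0] > 0).astype(float)

# 3-6-1 network (6 hidden nodes per formula (V)), sigmoid activations,
# batch back-propagation with an L2 weight-decay term.
n_in, n_hid, lr, decay = 3, 6, 1.0, 1e-4
W1 = rng.normal(scale=0.5, size=(n_in, n_hid)); b1 = np.zeros(n_hid)
W2 = rng.normal(scale=0.5, size=(n_hid, 1));    b2 = np.zeros(1)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(3000):
    h = sig(X @ W1 + b1)                  # forward pass
    p = sig(h @ W2 + b2)[:, 0]
    g = ((p - y) / len(X))[:, None]       # grad of mean cross-entropy wrt output logit
    gW2 = h.T @ g + decay * W2
    gh = (g @ W2.T) * h * (1 - h)         # back-propagate through the hidden layer
    gW1 = X.T @ gh + decay * W1
    W2 -= lr * gW2; b2 -= lr * g.sum(0)
    W1 -= lr * gW1; b1 -= lr * gh.sum(0)

p = sig(sig(X @ W1 + b1) @ W2 + b2)[:, 0]  # final forward pass
acc = float(np.mean((p > 0.5) == (y > 0.5)))
print(f"training accuracy: {acc:.2f}")
```

In the posture-correction stage the same forward pass would be applied to the test data set, flagging each bone vector whose output falls below 0.5 as non-standard.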
The invention has the beneficial effects that:
1. Unlike a conventional color camera, the Kinect sensor provides three-dimensional depth data. This overcomes the drawback that a color camera is easily disturbed by external factors such as lighting, accurately tracks the human body within the field of view, and improves the naturalness of human-computer interaction.
2. The Kinect can extract human skeleton information; when the user performs different actions, the corresponding bone points and bone segments have different position and angle information, which provides a very reliable and direct way to recognize human posture.
3. Kinect 2.0 is a comprehensive upgrade of the first-generation Kinect sensor. For skeleton tracking it can follow 6 complete skeletons (the first generation tracked at most 2) and 25 bone points per person (the first generation tracked 20); automatic tracking and positioning are more accurate and stable than in the previous generation, and the tracking range is larger. Using Kinect 2.0 therefore greatly improves the accuracy of gesture recognition.
4. Traditional feature-extraction methods require complex mathematical algorithms, are difficult to implement and run inefficiently. Working directly with the angle information of the human skeleton is more convenient and intuitive, and speeds up the program.
5. The BP neural network has several shortcomings, notably the loss of generalization ability caused by overfitting. The Bayesian regularization algorithm effectively suppresses overfitting, giving the network high generalization ability.
6. The target user is located using the user ID and the depth values of the bone-space coordinates, improving the accuracy of skeleton tracking.
7. The invention recognizes and corrects the action postures of the 25 bone points and 24 bone segments of the human body, improving the accuracy and robustness of posture correction.
Drawings
FIG. 1 is a block flow diagram of a method for posture correction based on depth information and skeletal information according to the present invention;
FIG. 2 is a schematic view of the bone point screening process of the present invention;
FIG. 3 is a schematic diagram of 24 skeletal vectors defined in the present invention;
FIG. 4 is a schematic diagram of the bone vector from the right shoulder joint to the right elbow joint in the present invention;
FIG. 5 is a schematic view of the process for extracting direction-cosine feature values according to the present invention.
Detailed Description
The invention is further described below with reference to the figures and examples of the description, without being limited thereto.
Example 1
A posture correction method based on depth information and bone information, as shown in fig. 1, includes the following specific steps:
(1) Valid bone point screening based on the user ID and the depth values of the bone-space coordinates: a Kinect 2.0 camera acquires the user's bone data and depth data, and all bone points of the target user are selected. The user ID is the unique user-tracking identifier of the Kinect 2.0 sensing device, assigned to each user within the effective visual range to distinguish which user the skeleton information belongs to. As shown in fig. 2, this comprises the following steps:
A. A Kinect 2.0 camera acquires the skeleton data and depth data of all objects within the effective visual range, where the skeleton data are the coordinates of the bone points in the bone-space coordinate system. The bone-space coordinate system takes the Kinect 2.0 camera as its origin, with the Z axis aligned with the orientation of the camera, the positive Y half-axis extending upwards, and the positive X half-axis extending to the sensor's left (from the sensor's point of view). The effective visual range is the range within which the Kinect 2.0 camera can correctly collect information, namely 0.8-3.0 m. The 25 bone points are: head, neck, right index finger, right thumb, right palm, right wrist, right elbow, right shoulder, shoulder center, left elbow, left wrist, left palm, left thumb, left index finger, spine, hip center, right hip, left hip, right knee, left knee, right ankle, left ankle, right foot and left foot;
B. Determining the bone points of the target user and filtering out those of other users: the Kinect 2.0 camera acquires the depth value of each user's bone points, accumulates them, and computes the average depth value of all bone points of each user; the averages are compared, the user with the smallest average depth value is taken as the target user, whose bone points are stored, while the bone points of the other users are filtered out. When several users remain in the effective visual range, many invalid bone points appear and degrade the accuracy of the later feature-value extraction, so the bone points of the target body must be determined and the other, invalid, bone points filtered out.
Kinect 2.0 can simultaneously track 25 bone points on 6 target users, together with the depth data stream and color data stream of objects in the field of view. The skeleton API in Kinect for Windows SDK provides the position information of the persons in front of the Kinect, including detailed posture, three-dimensional bone-point coordinates and user ID information. The data object type is provided in the form of skeleton frames, each of which stores 25 bone points.
(2) Smoothing the bone data selected in step (1) and standardizing the coordinates of the bone points in the bone-space coordinate system, which reduces the frame-to-frame variation of the bone-point positions. The data object type of the Kinect development tool Kinect for Windows SDK is provided in the form of skeleton frames, each frame comprising 25 bone points. This comprises the following steps:
C. Set the Smoothing attribute: a floating-point value between 0 and 1; the larger the value, the stronger the smoothing, and a value of 0 means no smoothing is performed.
Set the Correction attribute: a floating-point value between 0 and 1; the smaller the value, the smoother the skeleton information.
Set the JitterRadius attribute: a floating-point value between 0 and 1; when a bone point jitters beyond the set jitter radius, it is corrected back to within the radius.
Set the MaxDeviationRadius attribute, the maximum boundary of the jitter radius: a floating-point value between 0 and 1; any point beyond this boundary is not regarded as jitter but is treated as a new point.
Set the Prediction attribute (the number of predicted frames): a floating-point value between 0 and 1, with a default of 0.
D. Call the Kinect bone-data smoothing algorithm, i.e. pass the smoothing parameters of step C to SkeletonStream.Enable().
Smoothing the skeletal data incurs a performance overhead: the stronger the smoothing, the greater the cost. There is no fixed recipe for setting the smoothing parameters; continuous testing and debugging are required to achieve the best performance and effect, and different stages of program execution may require different parameter settings.
During bone-point tracking, the bone motion may change abruptly, for example when the target user's movements are not sufficiently fluent or the Kinect hardware performs poorly. The relative positions of the bone points may then vary widely from frame to frame, which can affect the application negatively, e.g. degrading the user experience or causing unintended control behaviour. Smoothing the skeleton data and standardizing the bone-point coordinates reduces the frame-to-frame differences in the bone-point positions.
(3) Drawing bone vectors from the standardized bone-point coordinates of step (2), calculating the direction cosines of each bone vector, and extracting the direction cosines as features; this comprises the following steps:
E. According to the structure of the human body, for the 25 bone points extracted in step (2), each pair of adjacent bone points is connected in turn to form a bone segment, which is defined as a bone vector; 24 bone vectors are obtained in total, as shown in fig. 3;
F. Let the bone vector from the right shoulder joint to the right elbow joint be v, as shown in FIG. 4, with the three-dimensional coordinates of the shoulder joint being (x1, y1, z1) and those of the elbow joint (x2, y2, z2). The bone vector v is then given by formula (I):

v = (x2 - x1, y2 - y1, z2 - z1)    (I)
G. Let the angles between the bone vector v and the X, Y and Z axes of the bone-space coordinate system be α, β and γ. The direction cosines of v are then given by formulas (II), (III) and (IV):

cos α = (x2 - x1) / |v|    (II)
cos β = (y2 - y1) / |v|    (III)
cos γ = (z2 - z1) / |v|    (IV)

where |v| = sqrt((x2 - x1)² + (y2 - y1)² + (z2 - z1)²).
Through the above algorithm, the three direction cosine values of each of the 24 bone vectors are calculated in turn, and these cosine values are extracted as features, as shown in Fig. 5.
When the user adopts different postures, each bone segment of the human body carries different position and angle information, so a given class of motion can be characterized by the direction cosine values of the 24 defined bone vectors.
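The direction cosine computation of formulas (I)–(IV) amounts to normalizing each bone vector; a minimal sketch:

```python
import numpy as np

def direction_cosines(p1, p2):
    """Direction cosines (cos α, cos β, cos γ) of the bone vector p1 -> p2."""
    v = np.asarray(p2, dtype=float) - np.asarray(p1, dtype=float)
    return v / np.linalg.norm(v)  # (vx/|v|, vy/|v|, vz/|v|)

# Example: shoulder at the origin, elbow at (1, 1, 0).
cosines = direction_cosines((0, 0, 0), (1, 1, 0))
```

By construction the three cosines of any bone vector satisfy cos²α + cos²β + cos²γ = 1, so each bone contributes a point on the unit sphere regardless of the bone's length, which is what makes the feature invariant to body size.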
(4) Acquiring a training data set and a test data set; both data sets consist of the direction cosine values of the user's bone vectors: each skeleton frame comprises 24 bone vectors, and each bone vector has three direction cosine values; the method comprises the following steps:
H. acquiring a training data set: a target user completes a series of standard actions under the guidance of a professional, and through the steps (1) to (3), the Kinect2.0 camera acquires three directional cosine values of 24 skeleton vectors corresponding to each action, namely a training data set;
I. acquiring a test data set: the target user independently completes a series of actions in the Kinect2.0 camera effective visual range, and through the steps (1) to (3), the Kinect2.0 camera obtains three directional cosine values of 24 skeleton vectors corresponding to each action, namely a test data set.
In the early stage of posture correction, the user completes a series of standard actions under the guidance of a professional, and the Kinect 2.0 camera records these actions as skeleton frames. After bone-point screening, bone-data smoothing, and extraction of the direction cosine feature values of the bone vectors, the required training data set is generated. The training set comprises a number of data frames (the more frames, the higher the precision); each frame of skeleton data contains the 24 bone vectors of one posture, and each bone vector has three direction cosine values. In the posture correction stage, the user independently completes a series of actions within the Kinect's field of view. The Kinect records these possibly non-standard actions as skeleton frames, the same bone-point screening, smoothing, and direction cosine feature extraction are applied, and the required test data set is generated.
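Assembling one skeleton frame into a 24 × 3 feature row, as described above, can be sketched as follows; the bone index pairs here are placeholders, not the actual Kinect joint hierarchy:

```python
import numpy as np

# Hypothetical bone hierarchy: 24 (parent, child) index pairs over 25 joints.
# A simple chain is used as a placeholder; the real Kinect tree differs.
BONES = [(i, i + 1) for i in range(24)]

def frame_features(joints):
    """Flatten one skeleton frame (25 x 3 joint coordinates) into
    24 bone vectors x 3 direction cosines = 72 feature values."""
    joints = np.asarray(joints, dtype=float)
    feats = []
    for a, b in BONES:
        v = joints[b] - joints[a]
        feats.append(v / np.linalg.norm(v))  # direction cosines of this bone
    return np.concatenate(feats)  # shape (72,)
```

Stacking one such row per recorded frame yields the training matrix (standard actions) or the test matrix (the user's own actions).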
(5) The training data set and the testing data set obtained in the step (4) are used as the input of a Bayesian regularization BP neural network, and 25 bone points and 24 bone segments are identified through the Bayesian regularization BP neural network; the method comprises the following steps:
G. determining the number of layers of the BP neural network to be 3, the number of input-layer nodes to be 3, and the number of output-layer nodes to be 1, and determining the number m of hidden-layer nodes with formula (V):

m = √(n + l) + α   (V)
in the formula (V), n is the number of nodes of the input layer, l is the number of nodes of the output layer, and α is an integer between 1 and 10;
Study of BP neural networks shows that a simple action recognition problem can be solved with a single hidden layer, so a three-layer BP network is chosen. The numbers of input- and output-layer nodes are generally determined by the dimensions of the input and output variables of the practical problem. Here the input variable is a three-dimensional direction cosine feature value, so the input layer has 3 nodes; the output variable indicates whether each bone vector matches the bone vector of the standard posture (1 or 0), so the output layer has 1 node. The number of hidden-layer nodes strongly affects network performance, and different node counts give markedly different results: too few nodes let the network iterate quickly but model the problem insufficiently, giving poor performance; too many nodes make the network structure complex, increase the computation, and lengthen training. An appropriate node count must therefore be selected.
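Formula (V) is commonly the empirical rule m = √(n + l) + α, which matches the variables described above (n input nodes, l output nodes, α an integer from 1 to 10). Under that assumption, scanning α yields the candidate hidden-layer sizes:

```python
import math

def hidden_node_candidates(n=3, l=1):
    """Candidate hidden-layer sizes from the empirical rule
    m = sqrt(n + l) + alpha, alpha = 1..10 (an assumption about formula (V))."""
    return [round(math.sqrt(n + l)) + alpha for alpha in range(1, 11)]
```

Each candidate would then be trained and compared, since, as noted above, the best node count must be found by testing rather than derived in closed form.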
K. The training data set is input into the Bayesian regularization BP neural network, which is trained on each bone vector. In the posture correction stage, the test data set is input into the trained network, which identifies the correctness of each bone vector, i.e., whether it matches the corresponding bone vector of the standard skeletal posture in the training set.
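The per-bone-vector classifier can be illustrated with a toy single-hidden-layer network. This sketch substitutes plain L2 weight decay for the patent's Bayesian regularization and trains on synthetic data, so it shows only the structure (3 cosine inputs, one hidden layer, 1 match/no-match output), not the actual training rule:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

class TinyBPNet:
    """3-m-1 single-hidden-layer BP network trained by gradient descent.

    L2 weight decay (lam) stands in for Bayesian regularization here;
    this is a simplification, not the patent's training algorithm.
    """
    def __init__(self, n_hidden=5, lam=1e-3, lr=0.5):
        self.W1 = rng.normal(0, 0.5, (3, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = rng.normal(0, 0.5, (n_hidden, 1))
        self.b2 = np.zeros(1)
        self.lam, self.lr = lam, lr

    def forward(self, X):
        self.h = sigmoid(X @ self.W1 + self.b1)
        return sigmoid(self.h @ self.W2 + self.b2)

    def train(self, X, y, epochs=3000):
        y = y.reshape(-1, 1)
        for _ in range(epochs):
            out = self.forward(X)
            # Backpropagation of squared error plus the L2 penalty.
            d2 = (out - y) * out * (1 - out)
            d1 = (d2 @ self.W2.T) * self.h * (1 - self.h)
            self.W2 -= self.lr * (self.h.T @ d2 / len(X) + self.lam * self.W2)
            self.b2 -= self.lr * d2.mean(0)
            self.W1 -= self.lr * (X.T @ d1 / len(X) + self.lam * self.W1)
            self.b1 -= self.lr * d1.mean(0)

# Synthetic stand-in task: label 1 when the first direction cosine is positive.
X = rng.uniform(-1, 1, (200, 3))
y = (X[:, 0] > 0).astype(float)
net = TinyBPNet()
net.train(X, y)
preds = (net.forward(X).ravel() > 0.5).astype(float)
```

In the patent's pipeline, one such binary decision per bone vector (24 in all) marks each bone segment as matching or not matching the standard posture.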
(6) Analyzing and processing the results, and displaying the analysis results for each bone segment and bone point on the user interface in real time. The skeleton data frames of the target user captured within the effective visual range of the Kinect 2.0 camera are presented on the user interface in real time: bone segments and bone points corresponding to bone vectors that match the standard posture are rendered in bright green, those corresponding to incorrect bone vectors are rendered in bright red, and a voice prompt accompanies the display.
Claims (4)
1. A posture correction method based on depth information and skeleton information is characterized by comprising the following specific steps:
(1) effective bone point screening based on user ID and depth values of bone space coordinates: acquiring user bone data and depth data by using a Kinect2.0 camera, and selecting all bone points of a target user; the user ID refers to a unique user tracking identifier of the Kinect2.0 sensing device and is used for being distributed to each user in an effective visual range so as to distinguish which user the skeleton information is;
(2) smoothing the bone data selected in the step (1), and standardizing coordinates of bone points in a bone space coordinate system; the method comprises the following steps:
C. setting a smoothing value attribute, whose value range is floating-point data between 0 and 1; the larger the smoothing value, the stronger the smoothing, and a smoothing value of 0 means no smoothing is performed;
setting a correction value attribute, whose value range is floating-point data between 0 and 1; the smaller the correction value, the smoother the skeleton information;
setting a jitter radius attribute, whose value range is floating-point data between 0 and 1; when a bone point's jitter exceeds the set jitter radius, it is corrected back to within the jitter radius;
setting a maximum jitter-radius boundary attribute, whose value range is floating-point data between 0 and 1; any point exceeding the maximum jitter-radius boundary is determined to be a new point rather than jitter;
setting a predicted frame size attribute, whose value range is floating-point data between 0 and 1, with a default value of 0;
D. passing each smoothing parameter from step C to SkeletonStream.Enable() by calling the Kinect skeleton data smoothing algorithm to smooth the skeleton data;
(3) drawing skeleton vectors according to the coordinates of the skeleton points in the skeleton space coordinate system, which realize the skeleton space coordinate standardization in the step (2), calculating the direction cosine value of each skeleton vector, and extracting the direction cosine values as features;
(4) acquiring a training data set and a test data set;
(5) the training data set and the testing data set obtained in the step (4) are used as the input of a Bayesian regularization BP neural network, and 25 bone points and 24 bone segments are identified through the Bayesian regularization BP neural network; the method comprises the following steps:
G. determining the number of layers of the BP neural network to be 3, the number of input-layer nodes to be 3, and the number of output-layer nodes to be 1, and determining the number m of hidden-layer nodes with formula (V):

m = √(n + l) + α   (V)
in the formula (V), n is the number of nodes of the input layer, l is the number of nodes of the output layer, and α is an integer between 1 and 10;
K. inputting a training data set serving as a training set into a Bayes regularization BP neural network, training and learning each bone vector, inputting a test data set into the trained Bayes regularization BP neural network in a posture correction stage, and identifying the correctness of each bone vector, namely whether the bone vector is matched with the corresponding bone vector of the standard bone posture of the training set;
(6) and analyzing and processing results, and displaying the analysis and processing results of each bone segment and each bone point on a user interface in real time.
2. The method for posture rectification based on depth information and skeletal information as claimed in claim 1, wherein the step (1) comprises the steps of:
A. acquiring skeleton data and depth data of all objects in an effective visual range by using a Kinect2.0 camera, wherein the skeleton data refers to coordinates of skeleton points in a skeleton space coordinate system; the value range of the effective visual range is 0.8-3.0 m; the skeletal points include: 25 skeletal points of the head, neck, right index finger, right thumb, right palm, right wrist, right elbow, right shoulder, center of shoulder, left elbow, left wrist, left palm, left thumb, left index finger, spine, center of hip, right hip, left hip, right knee, left knee, right ankle, left ankle, right foot and left foot;
B. determining the bone points of the target user, and filtering the bone points of other users: and acquiring the depth value of each user bone point by using a Kinect2.0 camera, accumulating, calculating the average depth value of all the bone points of each user, comparing the average depth values, storing the bone point of the target user, and filtering the bone points of other users, wherein the user with the minimum average depth value is the target user.
3. The method for posture rectification based on depth information and skeletal information as claimed in claim 2, wherein the step (3) comprises the steps of:
E. according to the human structural principle, for the 25 bone points extracted in step (2), sequentially connecting each pair of adjacent bone points to form a bone segment, defining each bone segment as a bone vector, and obtaining 24 bone vectors in total;
F. let any bone vector be the vector connecting a bone point with coordinates (x1, y1, z1) in the bone space coordinate system to a bone point with coordinates (x2, y2, z2); the bone vector v is then given by formula (I):

v = (x2 − x1, y2 − y1, z2 − z1)   (I)
G. let the angles between the bone vector v and the X, Y, and Z axes of the bone space coordinate system be α, β, and γ; the direction cosines of the bone vector v are then given by formulas (II), (III), and (IV):

cos α = (x2 − x1) / |v|   (II)
cos β = (y2 − y1) / |v|   (III)
cos γ = (z2 − z1) / |v|   (IV)

where |v| = √((x2 − x1)² + (y2 − y1)² + (z2 − z1)²).
Through the above algorithm, the three direction cosine values of each of the 24 bone vectors are calculated in turn, and these cosine values are extracted as features.
4. The method for posture rectification based on depth information and skeletal information as claimed in claim 3, wherein the step (4) comprises the steps of:
H. acquiring a training data set: a target user completes a series of standard actions under the guidance of a professional, and through the steps (1) to (3), the Kinect2.0 camera acquires three directional cosine values of 24 skeleton vectors corresponding to each action, namely a training data set;
I. acquiring a test data set: the target user independently completes a series of actions in the Kinect2.0 camera effective visual range, and through the steps (1) to (3), the Kinect2.0 camera obtains three directional cosine values of 24 skeleton vectors corresponding to each action, namely a test data set.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611251820.3A CN106650687B (en) | 2016-12-30 | 2016-12-30 | Posture correction method based on depth information and skeleton information |
PCT/CN2017/104990 WO2018120964A1 (en) | 2016-12-30 | 2017-09-30 | Posture correction method based on depth information and skeleton information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611251820.3A CN106650687B (en) | 2016-12-30 | 2016-12-30 | Posture correction method based on depth information and skeleton information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106650687A CN106650687A (en) | 2017-05-10 |
CN106650687B true CN106650687B (en) | 2020-05-19 |
Family
ID=58836708
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611251820.3A Active CN106650687B (en) | 2016-12-30 | 2016-12-30 | Posture correction method based on depth information and skeleton information |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN106650687B (en) |
WO (1) | WO2018120964A1 (en) |
Families Citing this family (59)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106650687B (en) * | 2016-12-30 | 2020-05-19 | 山东大学 | Posture correction method based on depth information and skeleton information |
CN107220608B (en) * | 2017-05-22 | 2021-06-08 | 华南理工大学 | Basketball action model reconstruction and defense guidance system and method |
CN106981075A (en) * | 2017-05-31 | 2017-07-25 | 江西制造职业技术学院 | The skeleton point parameter acquisition devices of apery motion mimicry and its recognition methods |
CN107308638B (en) * | 2017-06-06 | 2019-09-17 | 中国地质大学(武汉) | A kind of entertaining rehabilitation training of upper limbs system and method for virtual reality interaction |
CN107481280B (en) * | 2017-08-16 | 2020-05-15 | 北京优时尚科技有限责任公司 | Correction method of skeleton points and computing device |
CN107520843A (en) * | 2017-08-22 | 2017-12-29 | 南京野兽达达网络科技有限公司 | The action training method of one species people's multi-freedom robot |
CN108536292A (en) * | 2018-03-29 | 2018-09-14 | 深圳市芯汉感知技术有限公司 | A kind of data filtering methods and bone point coordinates accurate positioning method |
CN108720841A (en) * | 2018-05-22 | 2018-11-02 | 上海交通大学 | Wearable lower extremity movement correction system based on cloud detection |
CN108919943B (en) * | 2018-05-22 | 2021-08-03 | 南京邮电大学 | Real-time hand tracking method based on depth sensor |
CN109284696A (en) * | 2018-09-03 | 2019-01-29 | 吴佳雨 | A kind of image makings method for improving based on intelligent data acquisition Yu cloud service technology |
CN110991161B (en) * | 2018-09-30 | 2023-04-18 | 北京国双科技有限公司 | Similar text determination method, neural network model obtaining method and related device |
CN109758745B (en) * | 2018-09-30 | 2021-08-31 | 何家淳 | Python/Java-based artificial intelligence basketball training system |
CN111353345B (en) * | 2018-12-21 | 2024-04-16 | 上海史贝斯健身管理有限公司 | Method, apparatus, system, electronic device, and storage medium for providing training feedback |
CN111353347B (en) * | 2018-12-21 | 2023-07-04 | 上海史贝斯健身管理有限公司 | Action recognition error correction method, electronic device, and storage medium |
CN111382596A (en) * | 2018-12-27 | 2020-07-07 | 鸿富锦精密工业(武汉)有限公司 | Face recognition method and device and computer storage medium |
CN109589563B (en) * | 2018-12-29 | 2021-06-22 | 南京华捷艾米软件科技有限公司 | Dance posture teaching and assisting method and system based on 3D motion sensing camera |
CN109815907B (en) * | 2019-01-25 | 2023-04-07 | 深圳市象形字科技股份有限公司 | Sit-up posture detection and guidance method based on computer vision technology |
CN110032958B (en) * | 2019-03-28 | 2020-01-24 | 广州凡拓数字创意科技股份有限公司 | Human body limb language identification method and system |
CN109948579B (en) * | 2019-03-28 | 2020-01-24 | 广州凡拓数字创意科技股份有限公司 | Human body limb language identification method and system |
CN110083239B (en) * | 2019-04-19 | 2022-02-22 | 南京邮电大学 | Bone shake detection method based on dynamic weighting and grey prediction |
CN110334609B (en) * | 2019-06-14 | 2023-09-26 | 斯坦福启天联合(广州)研究院有限公司 | Intelligent real-time somatosensory capturing method |
CN110796699B (en) * | 2019-06-18 | 2024-03-01 | 叠境数字科技(上海)有限公司 | Optimal view angle selection method and three-dimensional human skeleton detection method for multi-view camera system |
CN110263720B (en) * | 2019-06-21 | 2022-12-27 | 中国民航大学 | Action recognition method based on depth image and skeleton information |
CN110472481B (en) * | 2019-07-01 | 2024-01-05 | 华南师范大学 | Sleeping gesture detection method, device and equipment |
CN110490168A (en) * | 2019-08-26 | 2019-11-22 | 杭州视在科技有限公司 | Meet machine human behavior monitoring method in airport based on target detection and skeleton point algorithm |
CN110507986B (en) * | 2019-08-30 | 2023-08-22 | 网易(杭州)网络有限公司 | Animation information processing method and device |
CN110584911A (en) * | 2019-09-20 | 2019-12-20 | 长春理工大学 | Intelligent nursing bed based on prone position recognition |
CN110728220A (en) * | 2019-09-30 | 2020-01-24 | 上海大学 | Gymnastics auxiliary training method based on human body action skeleton information |
CN110751100A (en) * | 2019-10-22 | 2020-02-04 | 北京理工大学 | Auxiliary training method and system for stadium |
CN111046749B (en) * | 2019-11-25 | 2023-05-23 | 西安建筑科技大学 | Human body falling behavior detection method based on depth data |
CN110991292A (en) * | 2019-11-26 | 2020-04-10 | 爱菲力斯(深圳)科技有限公司 | Action identification comparison method and system, computer storage medium and electronic device |
CN110969114B (en) * | 2019-11-28 | 2023-06-09 | 四川省骨科医院 | Human body action function detection system, detection method and detector |
CN112950751B (en) * | 2019-12-11 | 2024-05-14 | 阿里巴巴集团控股有限公司 | Gesture action display method and device, storage medium and system |
CN111402290B (en) * | 2020-02-29 | 2023-09-12 | 华为技术有限公司 | Action restoration method and device based on skeleton key points |
CN111539337A (en) * | 2020-04-26 | 2020-08-14 | 上海眼控科技股份有限公司 | Vehicle posture correction method, device and equipment |
CN111652076B (en) * | 2020-05-11 | 2024-05-31 | 重庆知熠行科技发展有限公司 | Automatic gesture recognition system for AD (analog-to-digital) meter understanding capability test |
CN111617464B (en) * | 2020-05-28 | 2023-02-24 | 西安工业大学 | Treadmill body-building method with action recognition function |
CN111680613B (en) * | 2020-06-03 | 2023-04-14 | 安徽大学 | Method for detecting falling behavior of escalator passengers in real time |
CN111639612A (en) * | 2020-06-04 | 2020-09-08 | 浙江商汤科技开发有限公司 | Posture correction method and device, electronic equipment and storage medium |
CN111860274B (en) * | 2020-07-14 | 2023-04-07 | 清华大学 | Traffic police command gesture recognition method based on head orientation and upper half skeleton characteristics |
CN111950392B (en) * | 2020-07-23 | 2022-08-05 | 华中科技大学 | Human body sitting posture identification method based on depth camera Kinect |
CN112149962B (en) * | 2020-08-28 | 2023-08-22 | 中国地质大学(武汉) | Risk quantitative assessment method and system for construction accident cause behaviors |
CN112149531B (en) * | 2020-09-09 | 2022-07-08 | 武汉科技大学 | Human skeleton data modeling method in behavior recognition |
CN112446433A (en) * | 2020-11-30 | 2021-03-05 | 北京数码视讯技术有限公司 | Method and device for determining accuracy of training posture and electronic equipment |
CN112494034B (en) * | 2020-11-30 | 2023-01-17 | 重庆优乃特医疗器械有限责任公司 | Data processing and analyzing system and method based on 3D posture detection and analysis |
CN112434639A (en) * | 2020-12-03 | 2021-03-02 | 郑州捷安高科股份有限公司 | Action matching method, device, equipment and storage medium |
CN112641441B (en) * | 2020-12-18 | 2024-01-02 | 河南翔宇医疗设备股份有限公司 | Posture evaluation method, system, device and computer readable storage medium |
CN112749671A (en) * | 2021-01-19 | 2021-05-04 | 澜途集思生态科技集团有限公司 | Human behavior recognition method based on video |
CN112966370B (en) * | 2021-02-09 | 2022-04-19 | 武汉纺织大学 | Design method of human body lower limb muscle training system based on Kinect |
CN112906604B (en) * | 2021-03-03 | 2024-02-20 | 安徽省科亿信息科技有限公司 | Behavior recognition method, device and system based on skeleton and RGB frame fusion |
CN113486757B (en) * | 2021-06-29 | 2022-04-05 | 北京科技大学 | Multi-person linear running test timing method based on human skeleton key point detection |
CN113609993B (en) * | 2021-08-06 | 2024-10-18 | 烟台艾睿光电科技有限公司 | Attitude estimation method, apparatus, equipment and computer readable storage medium |
CN114091511B (en) * | 2021-09-22 | 2024-08-02 | 广东工业大学 | Body-building action scoring method, system and device based on space-time information |
CN114171126B (en) * | 2021-10-26 | 2024-10-01 | 深圳晶泰科技有限公司 | Construction method, training method and related device of molecular training set |
CN116030137A (en) * | 2021-10-27 | 2023-04-28 | 华为技术有限公司 | Parameter determination method and related equipment |
CN114299604B (en) * | 2021-11-23 | 2024-07-12 | 河北汉光重工有限责任公司 | Two-dimensional image-based hand skeleton capturing and gesture distinguishing method |
CN114360060B (en) * | 2021-12-31 | 2024-04-09 | 北京航空航天大学杭州创新研究院 | Human body action recognition and counting method |
CN115497596B (en) * | 2022-11-18 | 2023-04-07 | 深圳聚邦云天科技有限公司 | Human body motion process posture correction method and system based on Internet of things |
CN117809380B (en) * | 2024-02-29 | 2024-05-14 | 万有引力(宁波)电子科技有限公司 | Gesture tracking method, gesture tracking device, gesture tracking apparatus, gesture tracking program product and readable storage medium |
Family Cites Families (16)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6788809B1 (en) * | 2000-06-30 | 2004-09-07 | Intel Corporation | System and method for gesture recognition in three dimensions using stereo imaging and color vision |
CN102824176B (en) * | 2012-09-24 | 2014-06-04 | 南通大学 | Upper limb joint movement degree measuring method based on Kinect sensor |
CN103230664B (en) * | 2013-04-17 | 2015-07-01 | 南通大学 | Upper limb movement rehabilitation training system and method based on Kinect sensor |
CN103246891B (en) * | 2013-05-28 | 2016-07-06 | 重庆邮电大学 | A kind of Chinese Sign Language recognition methods based on Kinect |
CN103473562B (en) * | 2013-09-18 | 2017-01-11 | 王碧春 | Automatic training and identifying system for specific human body action |
CN103489000A (en) * | 2013-09-18 | 2014-01-01 | 柳州市博源环科科技有限公司 | Achieving method of human movement recognition training system |
CN104200491A (en) * | 2014-08-15 | 2014-12-10 | 浙江省新华医院 | Motion posture correcting system for human body |
CN104517097A (en) * | 2014-09-24 | 2015-04-15 | 浙江大学 | Kinect-based moving human body posture recognition method |
CN104484574A (en) * | 2014-12-25 | 2015-04-01 | 东华大学 | Real-time human body gesture supervised training correction system based on quaternion |
CN104524742A (en) * | 2015-01-05 | 2015-04-22 | 河海大学常州校区 | Cerebral palsy child rehabilitation training method based on Kinect sensor |
CN104722056A (en) * | 2015-02-05 | 2015-06-24 | 北京市计算中心 | Rehabilitation training system and method using virtual reality technology |
CN105005769B (en) * | 2015-07-08 | 2018-05-15 | 山东大学 | A kind of sign Language Recognition Method based on depth information |
CN105307017A (en) * | 2015-11-03 | 2016-02-03 | Tcl集团股份有限公司 | Method and device for correcting posture of smart television user |
CN105807926B (en) * | 2016-03-08 | 2019-06-21 | 中山大学 | A kind of unmanned plane man-machine interaction method based on three-dimensional continuous dynamic hand gesture recognition |
CN106022213B (en) * | 2016-05-04 | 2019-06-07 | 北方工业大学 | A kind of human motion recognition method based on three-dimensional bone information |
CN106650687B (en) * | 2016-12-30 | 2020-05-19 | 山东大学 | Posture correction method based on depth information and skeleton information |
- 2016-12-30 CN CN201611251820.3A patent/CN106650687B/en active Active
- 2017-09-30 WO PCT/CN2017/104990 patent/WO2018120964A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
WO2018120964A1 (en) | 2018-07-05 |
CN106650687A (en) | 2017-05-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106650687B (en) | Posture correction method based on depth information and skeleton information | |
CN114067358B (en) | Human body posture recognition method and system based on key point detection technology | |
Chaudhari et al. | Yog-guru: Real-time yoga pose correction system using deep learning methods | |
CN106384093B (en) | A kind of human motion recognition method based on noise reduction autocoder and particle filter | |
CN107909060A (en) | Gymnasium body-building action identification method and device based on deep learning | |
CN102074034B (en) | Multi-model human motion tracking method | |
CN114724241A (en) | Motion recognition method, device, equipment and storage medium based on skeleton point distance | |
CN110674785A (en) | Multi-person posture analysis method based on human body key point tracking | |
Verma et al. | Gesture recognition using kinect for sign language translation | |
CN112800892B (en) | Human body posture recognition method based on openposition | |
CN113255522B (en) | Personalized motion attitude estimation and analysis method and system based on time consistency | |
CN110956141B (en) | Human body continuous action rapid analysis method based on local recognition | |
CN110490109A (en) | A kind of online human body recovery action identification method based on monocular vision | |
CN114998983A (en) | Limb rehabilitation method based on augmented reality technology and posture recognition technology | |
CN111383735A (en) | Unmanned body-building analysis method based on artificial intelligence | |
CN110956139A (en) | Human motion action analysis method based on time series regression prediction | |
CN113705540A (en) | Method and system for recognizing and counting non-instrument training actions | |
Yang et al. | Human exercise posture analysis based on pose estimation | |
CN113663312A (en) | Micro-inertia-based non-apparatus body-building action quality evaluation method | |
Sheu et al. | Improvement of human pose estimation and processing with the intensive feature consistency network | |
Amaliya et al. | Study on hand keypoint framework for sign language recognition | |
CN118380096A (en) | Rehabilitation training interaction method and device based on algorithm tracking and virtual reality | |
Sharma et al. | Real-time recognition of yoga poses using computer vision for smart health care | |
CN102930250B (en) | A kind of action identification method of multi-scale random field models | |
CN113240714B (en) | Human motion intention prediction method based on context awareness network |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||