CN110473602B - Body state data collection processing method for wearable body sensing game device - Google Patents

Body state data collection processing method for wearable body sensing game device

Info

Publication number
CN110473602B
CN110473602B
Authority
CN
China
Prior art keywords
posture
data samples
data
theta
target
Legal status
Active
Application number
CN201910559004.6A
Other languages
Chinese (zh)
Other versions
CN110473602A (en)
Inventor
冯俊燕
佴威至
单玲
贾飞勇
杜琳
燕学智
孙晓颖
Current Assignee
Jilin University
Original Assignee
Jilin University
Application filed by Jilin University
Priority to CN201910559004.6A
Publication of CN110473602A
Application granted
Publication of CN110473602B

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/011: Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition
    • G06V 40/23: Recognition of whole body movements, e.g. for sport training
    • G06V 40/25: Recognition of walking or running movements, e.g. gait recognition
    • G: PHYSICS
    • G16: INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H: HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H 20/00: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H 20/30: ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Multimedia (AREA)
  • General Engineering & Computer Science (AREA)
  • Social Psychology (AREA)
  • Medical Informatics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Epidemiology (AREA)
  • Primary Health Care (AREA)
  • Public Health (AREA)
  • Processing Or Creating Images (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a body state data collection and processing method for a wearable body sensing game device, comprising the following steps: 1. defining a posture expression; 2. randomly generating discrete target postures according to the posture expression definition; 3. collecting data samples of the discrete target postures; 4. randomly generating continuous target postures according to the posture expression definition; 5. collecting data samples of the continuous target postures; 6. correcting the data samples of the continuous target postures to form a body state data set; 7. training a posture recognition algorithm using the obtained training data set. The posture expression in the collected data set can be freely defined by a person, so a rehabilitation therapist can define postures according to the requirements of rehabilitation training for children with cerebral palsy instead of the posture expression being constrained solely by the recognition algorithm technology. The posture expression form in the system therefore matches the form understood by the therapist, which strengthens the disease-specific applicability of a somatosensory game system for the rehabilitation training of children with cerebral palsy.

Description

Body state data collection processing method for wearable body sensing game device
Technical Field
The invention belongs to the fields of human-computer interaction and virtual reality, and particularly relates to a body state data collection and processing method for a wearable body sensing game device.
Background
Cerebral palsy is a group of persistent disorders of central movement and posture development, with accompanying activity limitation, caused by non-progressive injury to the developing fetal or infant brain. The current incidence of cerebral palsy in China is about 2.48 per thousand; with roughly 16 million newborns per year, about 40,000 new cases of cerebral palsy arise each year. Children with cerebral palsy often present limb dysfunction, including elbow flexion, forearm pronation, wrist drop, flexion or hyperextension of the fingers, adduction of the thumb, and the like. Limb dysfunction can also affect the development of other functions to varying degrees, such as sensation, fine motor ability, gross motor ability, cognitive ability, and activities of daily living. Limb dysfunction has become an important factor affecting the quality of life of children with cerebral palsy, and improving their ability in the daily living environment has become an important subject of modern rehabilitation.
Treatments for upper-limb dysfunction in children with cerebral palsy include physical therapy, occupational therapy, constraint-induced movement therapy, and the like. Although these therapies are widely applied to the rehabilitation of upper-limb dysfunction in children with cerebral palsy, their clinical effects, advantages, and disadvantages differ. In addition, physical therapy, occupational therapy, and similar approaches require guidance from professional rehabilitation therapists, of whom there is a large shortage. In recent years, wearable somatosensory games have become a novel means of limb-function rehabilitation training for children with cerebral palsy at home and abroad, and one of the treatment methods that a child's guardian can carry out independently in the home environment. A wearable somatosensory game is a kind of video game: it uses tracking devices that can obtain spatial information such as their own position, posture, and acceleration, collects the child's limb movements and maps them to game operations, and uses the entertainment value of the game to encourage the child to perform training actions spontaneously. The realization of a wearable somatosensory game depends on a posture recognition algorithm, and training the posture recognition algorithm requires a posture data set. Existing posture data set acquisition and processing methods either depend on expensive motion capture equipment, or use a body posture expression form that cannot be freely defined by a person and does not match the expression form understood by a therapist, which reduces the applicability of wearable somatosensory games to the rehabilitation needs of specific conditions such as cerebral palsy.
Patent CN107174255A discloses a three-dimensional gait information acquisition method based on the Kinect somatosensory technology, which uses threshold filtering and double-exponential filtering to stabilize skeleton node data and a gait-event capturing method to identify gait events. It has the advantages of low cost, marker-free capture of three-dimensional skeleton nodes, and simple operation, but it is not directed at the training of a posture recognition algorithm.
Disclosure of Invention
The invention aims to provide a body state data set acquisition and processing method for a wearable body sensing game device. The posture expression in the collected data set can be freely defined by a person, so a rehabilitation therapist can be supported in defining postures according to the requirements of rehabilitation training for children with cerebral palsy, rather than the posture expression being constrained solely by the recognition algorithm technology. The posture expression form in the system then matches the form understood by the therapist, solving the problem that a somatosensory game system for cerebral palsy rehabilitation training is weakly targeted at the disease because the two expression forms do not match. Expensive motion capture equipment is avoided, a lower-cost scheme can be provided for the development of a somatosensory game system for the rehabilitation training of children with cerebral palsy, and the accessibility of such systems is improved.
The purpose of the invention is realized by adopting the following technical scheme:
a body state data collection processing method for a wearable body sensing game device comprises the following steps:
1. defining the expression of a posture;
2. randomly generating discrete target postures according to the posture expression definition;
3. collecting data samples of the discrete target postures;
4. randomly generating continuous target postures according to the posture expression definition;
5. collecting data samples of the continuous target postures;
6. correcting the data samples of the continuous target postures to form a posture data set;
7. training the posture recognition algorithm using the obtained training data set.
In step 1, according to the interactive actions designed for the somatosensory game, a normalized-value definition method is used to describe the state of the corresponding body action with a numerical value. Two definition methods can be used:
1) for an interactive action that can be completed by the movement of a single body joint, the two extreme states along the designated action dimension are defined as θ = 0 and θ = 1, respectively;
2) for an interactive action that requires the coordinated movement of several body joints, the two extreme states of the position of the relevant body part relative to the other parts are defined as θ = 0 and θ = 1, respectively.
In step 2, N_i body state values θi^(1), θi^(2), …, θi^(N_i) are randomly generated according to the posture expression defined in step 1, corresponding to N_i discrete target postures. The random values are generated using a triangular distribution centered at 0.5:
p(θ) = 4θ, (0 ≤ θ ≤ 0.5)
p(θ) = 4(1 − θ), (0.5 < θ ≤ 1)
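As an illustration of step 2, the following is a minimal sketch (in Python) of drawing discrete target posture values from the symmetric triangular distribution on [0, 1] with mode 0.5; the use of NumPy and the function name are illustrative assumptions rather than part of the patented method.

import numpy as np

def generate_discrete_targets(n_i, rng=None):
    # Sketch of step 2: draw n_i posture values theta in [0, 1] from a
    # triangular distribution with lower bound 0, mode 0.5, upper bound 1.
    rng = rng or np.random.default_rng()
    return rng.triangular(0.0, 0.5, 1.0, size=n_i)

# Example: N_i = 2400, as used later in Example 1.
discrete_targets = generate_discrete_targets(2400)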
in step 3, collecting data samples of discrete target states, specifically comprising the following steps:
step 3.1: displaying the 1 st discrete target posture generated in the step 2 by using a visual graph;
step 3.2: making a target posture action according to the graphic display;
step 3.3: triggering one-time acquisition of posture data of the wearable motion sensing game equipment when the action is kept unchanged, and storing one data sample;
step 3.4: repeating steps 3.1 to 3.3 for each of the N_i discrete target postures generated in step 2, collecting and storing N_i data samples.
In step 4, N_g pairs of body state values (θ1^(1), θ2^(1)), (θ1^(2), θ2^(2)), …, (θ1^(N_g), θ2^(N_g)) are randomly generated according to the posture expression defined in step 1, corresponding to N_g continuous target postures. Each pair comprises two body state values θ1 and θ2 whose distance is not less than 0.5, used as the starting and ending values of the continuous target posture, and the random values are generated using a uniform distribution:
p(θ1)=1,(0≤θ1≤1)
p(θ2)=1,(0≤θ2≤1)
θ2 − θ1 ≥ 0.5
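As an illustration of step 4, the following minimal sketch (in Python) draws start/end pairs from a uniform distribution and enforces θ2 − θ1 ≥ 0.5 by rejection sampling, mirroring the procedure described later in Example 1; the function name and library choice are illustrative assumptions.

import numpy as np

def generate_continuous_targets(n_g, min_gap=0.5, rng=None):
    # Sketch of step 4: keep a (theta1, theta2) pair only if the pair of
    # uniform draws satisfies theta2 - theta1 >= min_gap; otherwise redraw.
    rng = rng or np.random.default_rng()
    pairs = []
    while len(pairs) < n_g:
        theta1, theta2 = rng.uniform(0.0, 1.0, size=2)
        if theta2 - theta1 >= min_gap:
            pairs.append((theta1, theta2))
    return np.array(pairs)

# Example: N_g = 80, as used later in Example 1.
continuous_targets = generate_continuous_targets(80)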
in step 5, collecting data samples of continuous target posture, specifically comprising the following steps:
step 5.1: displaying the 1st continuous target posture generated in step 4 using three visual graphics: the first shows the body state corresponding to the starting value θ1 of the continuous target posture, the second shows the body state corresponding to the ending value θ2 of the continuous target posture, and the third shows an animation of the transition from the starting state to the ending state of the continuous target posture;
step 5.2: making the posture action corresponding to the starting value θ1 according to the graphical display;
step 5.3: triggering a command that wearable motion sensing game equipment starts to collect data samples;
step 5.4: immediately after completing step 5.3, starting the gradual body state change, transitioning at a constant speed from the posture corresponding to the starting value θ1 to the posture corresponding to the ending value θ2, with no time limit;
step 5.5: when the posture reaches the one corresponding to the ending value θ2, triggering the command for the wearable motion sensing game device to stop acquiring data samples;
step 5.6: resampling the indefinite number of data samples acquired between steps 5.3 and 5.5 to a fixed number of K data samples θg^(1), θg^(2), …, θg^(K);
step 5.7: labeling the K resampled data samples, in order and in a linear relation, with body state values lying between the starting value θ1 and the ending value θ2 (a code sketch of steps 5.6 and 5.7 follows step 5.8):
θg^(k) = θ1 + (k − 1)(θ2 − θ1)/(K − 1), k = 1, 2, …, K
step 5.8: repeating steps 5.1 to 5.7 for each of the N_g continuous target postures generated in step 4, collecting and storing N_g groups of data samples, containing K·N_g data samples in total.
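The following is a minimal sketch (in Python) of the resampling and linear labeling in steps 5.6 and 5.7. It assumes the raw capture is stored as an array of sensor readings and that resampling is done by linear interpolation along the capture; these choices, like all names below, are illustrative assumptions.

import numpy as np

def resample_and_label(raw_samples, theta1, theta2, k=50):
    # Sketch of steps 5.6-5.7. raw_samples has shape (n, d): n readings of d
    # channels captured between the start and stop commands (n varies per trial).
    raw_samples = np.asarray(raw_samples, dtype=float)
    n, d = raw_samples.shape
    src = np.linspace(0.0, 1.0, n)   # positions of the original readings
    dst = np.linspace(0.0, 1.0, k)   # positions of the K resampled readings
    resampled = np.column_stack(
        [np.interp(dst, src, raw_samples[:, j]) for j in range(d)]
    )
    # Labels theta_g^(k) spaced linearly from theta1 to theta2 inclusive.
    labels = np.linspace(theta1, theta2, k)
    return resampled, labels

# Example: a trial with 137 raw readings of 12 channels, K = 50 as in Example 1.
readings, labels = resample_and_label(np.random.rand(137, 12), theta1=0.2, theta2=0.9, k=50)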
In step 6, the data samples of the discrete target posture acquired in step 3 are used to correct the data samples of the continuous target posture acquired in step 5, and the method specifically includes the following steps:
step 6.1: training a posture estimation algorithm A by using the data samples of the discrete target postures acquired in the step 3;
step 6.2: using the trained algorithm A, estimating a posture value θe^(k) for each of the K continuous-target-posture data samples obtained in step 5.6;
Step 6.3: using a quadratic function to establish a distortion mapping model from a K epsilon [1, K ] interval to a j epsilon [1, K ] interval:
j(k)=ak2+bk+c
j(1)=1
j(K)=K
step 6.4: the sum of the error values e^(k) over the K samples is taken as an energy function E, and the model parameters a, b and c that minimize the energy function are solved for by an optimization method to obtain the optimized distortion mapping model, where the energy function is
E = Σ_{k=1}^{K} e^(k)
in which e^(k) denotes the error between the estimated value θe^(k) and the label value θg^([j(k)]) given by the mapping, and [j(k)] denotes rounding j(k) to the nearest integer.
Step 6.5: using the optimized warped mapping model j (k) to label the body state value theta of each data sampleg (k)Substitution to thetag ([j(k)])
Step 6.6: repeating steps 6.1 to 6.5, respectively for N generated in step 5gAnd correcting the group data samples to form a posture data set which can be used for training.
In step 7, the posture data set obtained in step 6 is taken as the training set, and the posture recognition algorithm is trained using a neural network comprising 2 hidden layers and 5 nodes in each hidden layer; the trained algorithm can be applied in a wearable somatosensory game system to support the rehabilitation training of children with cerebral palsy.
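As an illustration of step 7, a network of the stated size can be trained with an off-the-shelf regressor; treating posture recognition as regression of θ from tracker features, the library choice, and the remaining hyperparameters are assumptions made only for this sketch.

import numpy as np
from sklearn.neural_network import MLPRegressor

# X: one feature vector per sample (e.g. pose data of the trackers);
# y: the posture labels theta from the data set built in step 6.
# Random data stands in for the real training set here.
X = np.random.rand(4000, 12)
y = np.random.rand(4000)

# Two hidden layers with 5 nodes each, as specified in step 7.
model = MLPRegressor(hidden_layer_sizes=(5, 5), max_iter=2000, random_state=0)
model.fit(X, y)
theta_estimates = model.predict(X[:5])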
The invention has the advantages that:
1. The posture expression in the collected data set can be freely defined by a person, so a rehabilitation therapist can be supported in defining postures according to the requirements of rehabilitation training for children with cerebral palsy, rather than the posture expression being constrained solely by the recognition algorithm technology; the posture expression form in the system thus matches the form understood by the therapist, the therapist can participate more deeply in the development of the wearable somatosensory game system for cerebral palsy rehabilitation training, and the system is more strongly targeted at the application of rehabilitation training for children with cerebral palsy;
2. The data collection method avoids using expensive motion capture equipment, can provide a lower-cost scheme for the development of a somatosensory game system for the rehabilitation training of children with cerebral palsy, and improves the accessibility of the system.
Drawings
FIG. 1 is a working schematic diagram of a wearable motion sensing game device;
FIG. 2 is a flow chart of a data set acquisition processing method of the present invention;
FIG. 3 is a schematic diagram of a method for correcting data samples of continuous target posture;
FIG. 4 shows the commercial Vive Tracker used in Example 1 and the wearing method used in Example 1.
Detailed Description
The following describes the implementation process of the present invention with reference to the attached drawings.
As shown in FIG. 1, the core of the operation of the motion sensing game device is the posture recognition algorithm. Before the system is actually applied, the posture recognition algorithm needs a posture data set for training, and this posture data set needs to be acquired with the same somatosensory game device. In actual application, the trained posture recognition algorithm recognizes the user's motion as a posture and maps the posture to a game operation, realizing game control through body movement.
As shown in FIG. 2, in the body state data set acquisition and processing method for wearable body sensing game devices provided by the invention, discrete target postures and continuous target postures are randomly generated from a manually defined body posture expression, data samples are collected for both kinds of target posture, and the data samples of the continuous target postures are corrected using the data samples of the discrete target postures, forming a data set for training a posture recognition algorithm. The method comprises the following steps:
1. defining the expression of a posture;
2. randomly generating discrete target postures;
3. collecting data samples of the discrete target postures;
4. randomly generating continuous target postures;
5. collecting data samples of the continuous target postures;
6. correcting the data samples of the continuous target postures to form a posture data set for training;
7. training the posture recognition algorithm using the obtained training data set.
Example 1
The data acquisition method provided by the invention requires any tracker capable of providing absolute spatial position and orientation with 6 degrees of freedom. In this embodiment, two Vive Trackers from the HTC Vive commercial virtual reality equipment are used, as shown in FIG. 4; they are worn with fixing straps near the palm and near the elbow joint of the forearm of a healthy adult, respectively. This embodiment collects a posture data set for the wrist joint back extension action, trains a wrist joint back extension posture recognition algorithm, and applies it in a somatosensory game system.
In step 1, according to the interactive actions designed for the somatosensory game, a normalized-value definition method is used to describe the state of the corresponding body action with a numerical value. Two definition methods can be used:
1) for an interactive action that can be completed by the movement of a single body joint, the two extreme states along the designated action dimension are defined as θ = 0 and θ = 1, respectively;
2) for an interactive action that requires the coordinated movement of several body joints, the two extreme states of the position of the relevant body part relative to the other parts are defined as θ = 0 and θ = 1, respectively.
This embodiment addresses the wrist joint back extension action, which is an interactive action that can be completed by the movement of a single body joint; the wrist joint without back extension is defined as θ = 0, and back extension of the wrist joint to 90 degrees is defined as θ = 1.
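A minimal sketch (in Python) of this normalized posture definition, assuming the wrist back-extension angle in degrees is already available from the trackers; the function name is an illustrative assumption.

def wrist_extension_to_theta(angle_deg):
    # 0 degrees of back extension -> theta = 0; 90 degrees -> theta = 1.
    # Values outside the range are clamped to [0, 1].
    return min(max(angle_deg / 90.0, 0.0), 1.0)

assert wrist_extension_to_theta(45.0) == 0.5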
In step 2, N_i body state values θi^(1), θi^(2), …, θi^(N_i) are randomly generated according to the posture expression defined in step 1, corresponding to N_i discrete target postures. The random values are generated using a triangular distribution centered at 0.5:
p(θ) = 4θ, (0 ≤ θ ≤ 0.5)
p(θ) = 4(1 − θ), (0.5 < θ ≤ 1)
in this embodiment, N is usedi=2400。
In step 3, collecting data samples of discrete target states, specifically comprising the following steps:
step 3.1: displaying the 1 st discrete target posture generated in the step 2 by using a visual graph;
step 3.2: making a target posture action according to the graphic display;
step 3.3: triggering one-time acquisition of posture data of the wearable motion sensing game equipment when the action is kept unchanged, and storing one data sample;
step 3.4: repeating steps 3.1 to 3.3 for each of the N_i discrete target postures generated in step 2, collecting and storing N_i data samples.
In this embodiment, 8 users each provide 300 samples.
In step 4, N_g pairs of body state values (θ1^(1), θ2^(1)), (θ1^(2), θ2^(2)), …, (θ1^(N_g), θ2^(N_g)) are randomly generated according to the posture expression defined in step 1, corresponding to N_g continuous target postures. Each pair comprises two body state values θ1 and θ2 whose distance is not less than 0.5, used as the starting and ending values of the continuous target posture, and the random values are generated using a uniform distribution:
p(θ1)=1,(0≤θ1≤1)
p(θ2)=1,(0≤θ2≤1)
if a pair of randomly generated posture values do not satisfy
θ2 − θ1 ≥ 0.5
The pair of posture values is discarded, a pair of posture values is re-generated randomly, and the process is repeated.
In this embodiment, N_g = 80 is used.
In step 5, collecting data samples of continuous target posture, specifically comprising the following steps:
step 5.1: displaying the 1st continuous target posture generated in step 4 using three visual graphics: the first shows the body state corresponding to the starting value θ1, the second shows the body state corresponding to the ending value θ2 of the continuous target posture, and the third shows an animation of the transition of the continuous target posture from the starting state to the ending state;
step 5.2: making the posture action corresponding to the starting value θ1 according to the graphical display;
step 5.3: triggering a command that the wearable motion sensing game device starts to collect data samples;
step 5.4: immediately after completing step 5.3, starting the gradual posture change, transitioning at a constant speed from the posture corresponding to the starting value θ1 to the posture corresponding to the ending value θ2, with no time limit;
step 5.5: when the posture reaches the one corresponding to the ending value θ2, triggering the command for the wearable motion sensing game device to stop collecting data samples;
step 5.6: resampling the indefinite number of data samples collected between steps 5.3 and 5.5 to a fixed number of K data samples θg^(1), θg^(2), …, θg^(K);
step 5.7: labeling the K resampled data samples, in order and in a linear relation, with body state values lying between the starting value θ1 and the ending value θ2:
θg^(k) = θ1 + (k − 1)(θ2 − θ1)/(K − 1), k = 1, 2, …, K
step 5.8: repeating steps 5.1 to 5.7 for each of the N_g continuous target postures generated in step 4, collecting and storing N_g groups of data samples, containing K·N_g data samples in total.
In this embodiment, each of the 8 users provides 10 groups of variable-length data samples, and each group is resampled to K = 50 data samples.
In step 6, the data samples of the discrete target posture acquired in step 3 are used to correct the data samples of the continuous target posture acquired in step 5, and the method specifically includes the following steps:
step 6.1: training a posture estimation algorithm A by using the data samples of the discrete target postures acquired in the step 3;
step 6.2: using the trained algorithm A, estimating a posture value θe^(k) for each of the K continuous-target-posture data samples obtained in step 5.6;
Step 6.3: as shown in fig. 3, a distortion mapping model from a K e [1, K ] interval to a j e [1, K ] interval is established by using a quadratic function, a posture recognition algorithm is trained by using data samples of discrete target postures, posture estimation is performed on data samples of continuous target postures by using the algorithm, the difference between an estimation value and an original mark value is optimized, and a group of K data samples of original continuous target postures is re-marked.
j(k)=ak2+bk+c
j(1)=1
j(K)=K
Step 6.4: error value e of K samples(k)The sum is an energy function E, model parameters a, b and c which enable the energy function to be minimum are solved by using an optimization method to obtain an optimized distortion mapping model, wherein the energy function is as follows:
Figure BDA0002107717310000081
wherein [ j (k) ] denotes rounding off j (k) to an integer.
Step 6.5: using the optimized warped mapping model j (k) to label the body state value theta of each data sampleg (k)Substitution to thetag ([j(k)])
Step 6.6: repeating steps 6.1 to 6.5, respectively for N generated in step 5gAnd correcting the group data samples to form a posture data set.
In step 7, the posture data set obtained in step 6 is taken as the training set, and a posture recognition algorithm is trained using a neural network comprising 2 hidden layers and 5 nodes in each hidden layer.
The trained posture recognition algorithm can recognize the back-extension angle of the wrist joint. It takes the form of a computer program that uses the real-time position and orientation tracking data of the two Vive Trackers as input and outputs the back-extension angle of the wrist joint. The program is embedded as a module in a somatosensory game program: when a child with cerebral palsy wears the two Vive Trackers in the same way, the back-extension angle of the child's wrist joint is obtained in real time and mapped to game operations, and the entertainment value of the game encourages the child to perform the training actions spontaneously.
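At runtime this could look roughly like the following sketch (in Python), assuming the regression model from step 7 and a hypothetical 12-dimensional feature vector built from the 6-DoF poses of the two trackers; none of these names or shapes are specified by the patent.

import numpy as np

def tracker_features(palm_pose, forearm_pose):
    # Hypothetical feature layout: concatenation of two 6-DoF pose vectors.
    return np.concatenate([palm_pose, forearm_pose])

def wrist_angle_for_game(model, palm_pose, forearm_pose):
    # Estimate theta with the trained model and map it back to the
    # back-extension angle used as the game control value.
    features = tracker_features(palm_pose, forearm_pose)[None, :]
    theta = float(model.predict(features)[0])
    return 90.0 * min(max(theta, 0.0), 1.0)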

Claims (4)

1. A body state data collection and processing method for a wearable body sensing game device, characterized by: defining a posture expression according to the game interaction action requirements, randomly generating discrete target postures and continuous target postures, collecting and storing data samples of both kinds of target posture, correcting the data samples of the continuous target postures, and forming a posture training data set from the corrected data samples of the continuous target postures, wherein the method comprises the following steps:
step 1, defining the posture expression:
according to the interactive action designed by the motion sensing game, a definition method of a normalized numerical value is used for defining the state of the corresponding body action by using the numerical value;
step 2, randomly generating a discrete target posture:
randomly generating N_i body state values θi^(1), θi^(2), …, θi^(N_i) according to the posture expression defined in step 1, corresponding to N_i discrete target postures, wherein the random values of θ are generated using a triangular distribution centered at 0.5:
p(θ) = 4θ, (0 ≤ θ ≤ 0.5)
p(θ) = 4(1 − θ), (0.5 < θ ≤ 1)
in step 3, collecting data samples of discrete target states:
the method specifically comprises the following steps:
3.1: displaying the 1 st discrete target posture generated in the step 2 by using a visual graph;
3.2: making a target posture action according to the graphic display;
3.3: triggering one-time acquisition of posture data of the wearable motion sensing game equipment when the action is kept unchanged, and storing one data sample;
3.4: repeating steps 3.1 to 3.3 for each of the N_i discrete target postures generated in step 2, collecting and storing N_i data samples;
in step 4, randomly generating a continuous target posture:
randomly generating N_g pairs of body state values (θ1^(1), θ2^(1)), (θ1^(2), θ2^(2)), …, (θ1^(N_g), θ2^(N_g)) according to the posture expression defined in step 1, corresponding to N_g continuous target postures, each pair comprising two body state values θ1 and θ2 whose distance is not less than 0.5 as the starting and ending values of the continuous target posture, wherein the random values are generated using a uniform distribution:
p(θ1)=1,(0≤θ1≤1)
p(θ2)=1,(0≤θ2≤1)
θ2 − θ1 ≥ 0.5
and 5, acquiring data samples of continuous target states:
the method specifically comprises the following steps:
5.1: displaying the 1st continuous target posture generated in step 4 using three visual graphics: the first shows the body state corresponding to the starting value θ1 of the continuous target posture, the second shows the body state corresponding to the ending value θ2, and the third shows an animation of the transition from the starting state to the ending state of the continuous target posture;
5.2: making the posture action corresponding to the starting value θ1 according to the graphical display;
5.3: triggering a command that the wearable motion sensing game device starts to collect data samples;
5.4: immediately after completing step 5.3, starting the gradual posture change, transitioning at a constant speed from the posture corresponding to the starting value θ1 to the posture corresponding to the ending value θ2, with no time limit;
5.5: when the posture reaches the one corresponding to the ending value θ2, triggering the command for the wearable motion sensing game device to stop acquiring data samples;
5.6: resampling the indefinite number of data samples collected between steps 5.3 and 5.5 to a fixed number of K data samples θg^(1), θg^(2), …, θg^(K);
5.7: labeling the K resampled data samples, in order and in a linear relation, with body state values lying between the starting value θ1 and the ending value θ2:
θg^(k) = θ1 + (k − 1)(θ2 − θ1)/(K − 1), k = 1, 2, …, K
5.8: repeating steps 5.1 to 5.7 for each of the N_g continuous target postures generated in step 4, collecting and storing N_g groups of data samples, containing K·N_g data samples in total;
step 6, correcting the data samples of the continuous target posture acquired in the step 5 by using the data samples of the discrete target posture acquired in the step 3 to form a posture training data set:
the method specifically comprises the following steps:
6.1: training a posture estimation algorithm A by using the data samples of the discrete target postures acquired in the step 3;
6.2: using the trained algorithm A, estimating a posture value θe^(k) for each of the K continuous-target-posture data samples obtained in step 5.6;
6.3: using a quadratic function to establish a distortion mapping model from the interval k ∈ [1, K] to the interval j ∈ [1, K]:
j(k) = ak² + bk + c
subject to the boundary conditions
j(1) = 1
j(K) = K
step 6.4: taking the sum of the error values e^(k) over the K samples as an energy function E, and solving for the model parameters a, b and c that minimize the energy function by an optimization method to obtain the optimized distortion mapping model, where the energy function is
E = Σ_{k=1}^{K} e^(k)
in which e^(k) denotes the error between the estimated value θe^(k) and the label value θg^([j(k)]) given by the mapping, and [j(k)] denotes rounding j(k) to the nearest integer;
step 6.5: using the optimized distortion mapping model j(k), replacing the label body state value θg^(k) of each data sample with θg^([j(k)]);
step 6.6: repeating steps 6.1 to 6.5 for each of the N_g groups of data samples obtained in step 5, correcting them to form the posture training data set;
step 7: training a posture recognition algorithm using the training data set obtained in step 6.
2. The body state data collection and processing method for a wearable body sensing game device according to claim 1, wherein: for an interactive action that can be completed by the movement of a single body joint, the two extreme states in the specified action dimension are defined as θ = 0 and θ = 1, respectively.
3. The body state data collection and processing method for a wearable body sensing game device according to claim 1, wherein: for an interactive action that requires the coordinated movement of several body joints, the two extreme states of the position of the relevant body part relative to the other parts are defined as θ = 0 and θ = 1, respectively.
4. The body state data collection and processing method for a wearable body sensing game device according to claim 1, wherein: the posture data set obtained in step 6 is taken as the training set, and a posture recognition algorithm is trained using a neural network comprising 2 hidden layers and 5 nodes in each hidden layer.
CN201910559004.6A 2019-06-26 2019-06-26 Body state data collection processing method for wearable body sensing game device Active CN110473602B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910559004.6A CN110473602B (en) 2019-06-26 2019-06-26 Body state data collection processing method for wearable body sensing game device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910559004.6A CN110473602B (en) 2019-06-26 2019-06-26 Body state data collection processing method for wearable body sensing game device

Publications (2)

Publication Number Publication Date
CN110473602A CN110473602A (en) 2019-11-19
CN110473602B (en) 2022-05-24

Family

ID=68507523

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910559004.6A Active CN110473602B (en) 2019-06-26 2019-06-26 Body state data collection processing method for wearable body sensing game device

Country Status (1)

Country Link
CN (1) CN110473602B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021114230A1 (en) * 2019-12-13 2021-06-17 中国科学院深圳先进技术研究院 Three-dimensional gait data processing method and system, server, and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205126248U (en) * 2015-11-16 2016-04-06 台州学院 Novel action coordination capability test instrument
CN107358210A (en) * 2017-07-17 2017-11-17 广州中医药大学 Human motion recognition method and device

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10166680B2 (en) * 2015-07-31 2019-01-01 Heinz Hemken Autonomous robot using data captured from a living subject
US10854104B2 (en) * 2015-08-28 2020-12-01 Icuemotion Llc System for movement skill analysis and skill augmentation and cueing

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205126248U (en) * 2015-11-16 2016-04-06 台州学院 Novel action coordination capability test instrument
CN107358210A (en) * 2017-07-17 2017-11-17 广州中医药大学 Human motion recognition method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Application of Wearable Device HTC VIVE in Upper Limb Rehabilitation Training; Donglin Chen et al.; 2018 2nd IEEE Advanced Information Management, Communicates, Electronic and Automation Control Conference (IMCEC); 2018-09-24; pp. 1460-1464 *
Research on a leg rehabilitation motion capture system based on miniature inertial sensors; 李经玮; China Master's Theses Full-text Database, Medicine & Health Sciences; 2018-03-15; Vol. 2018, No. 03; p. E060-141 *

Also Published As

Publication number Publication date
CN110473602A (en) 2019-11-19

Similar Documents

Publication Publication Date Title
Da Gama et al. Motor rehabilitation using Kinect: a systematic review
Bai et al. Development of a novel home based multi-scene upper limb rehabilitation training and evaluation system for post-stroke patients
Xu et al. Emotion recognition from gait analyses: Current research and future directions
Chen et al. Eyebrow emotional expression recognition using surface EMG signals
CN104524742A (en) Cerebral palsy child rehabilitation training method based on Kinect sensor
CN109453509A (en) It is a kind of based on myoelectricity and motion-captured virtual upper limb control system and its method
CN104274183A (en) Motion information processing apparatus
CN111631731B (en) Near-infrared brain function and touch force/motion information fusion assessment method and system
Miao et al. Upper limb rehabilitation system for stroke survivors based on multi-modal sensors and machine learning
Feng et al. Teaching training method of a lower limb rehabilitation robot
WO2023206833A1 (en) Wrist rehabilitation training system based on muscle synergy and variable stiffness impedance control
CN104571837B (en) A kind of method and system for realizing man-machine interaction
CN107491648A (en) Hand recovery training method based on Leap Motion motion sensing control devices
CN110232963A (en) A kind of upper extremity exercise functional assessment system and method based on stereo display technique
Song et al. Activities of daily living-based rehabilitation system for arm and hand motor function retraining after stroke
Brooks et al. Quantifying upper-arm rehabilitation metrics for children through interaction with a humanoid robot
Song et al. Proposal of a wearable multimodal sensing-based serious games approach for hand movement training after stroke
Luo et al. An interactive therapy system for arm and hand rehabilitation
Schez-Sobrino et al. A distributed gamified system based on automatic assessment of physical exercises to promote remote physical rehabilitation
CN110473602B (en) Body state data collection processing method for wearable body sensing game device
CN111312363B (en) Double-hand coordination enhancement system based on virtual reality
CN113035000A (en) Virtual reality training system for central integrated rehabilitation therapy technology
CN111857352A (en) Gesture recognition method based on imagination type brain-computer interface
Groenegress et al. The physiological mirror—a system for unconscious control of a virtual environment through physiological activity
CN207886596U (en) A kind of VR rehabilitation systems based on mirror neuron

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant