CN113515967B - Motion intention recognition model generation method, device, equipment and storage medium


Info

Publication number: CN113515967B
Authority: CN (China)
Prior art keywords: sample, intention recognition, joint, movement, recognition model
Legal status: Active (granted)
Application number: CN202010229806.3A
Other languages: Chinese (zh)
Other versions: CN113515967A
Inventors: Lin Xu (林旭), Tao Dapeng (陶大鹏), Wu Wanyin (吴婉银), Wang Ruxin (王汝欣)
Current and original assignee: Shenzhen Union Vision Innovation Technology Co., Ltd.
Application filed by Shenzhen Union Vision Innovation Technology Co., Ltd.
Publications: CN113515967A (application), CN113515967B (grant)


Classifications

    • G06F 2218/12 — Aspects of pattern recognition specially adapted for signal processing: classification; matching
    • A61B 5/1038 — Measuring plantar pressure during gait (detecting, measuring or recording the shape, pattern or movement of the body for diagnostic purposes)
    • G06F 18/214 — Pattern recognition: design or setup of recognition systems; generating training patterns; bootstrap methods, e.g. bagging or boosting


Abstract

The invention discloses a method, a device, a computer device, and a storage medium for generating a movement intention recognition model. Multiple sets of sample information collected by a wearable device are acquired, each set comprising an inertial measurement unit signal, a plantar pressure signal, and a sample electromyographic signal. In each set of sample information, a sample joint moment is obtained from the inertial measurement unit signal and the plantar pressure signal. The sample electromyographic signals and the corresponding sample joint moments of each set are then input into a preset neural network model for training to obtain a movement intention recognition model. This overcomes the latency of conventional movement intention recognition, as well as the sweating, muscle fatigue, and similar problems caused by prolonged muscle activity, and achieves intention recognition with high robustness and high accuracy.

Description

Motion intention recognition model generation method, device, equipment and storage medium
Technical Field
The present invention relates to the field of human-computer interaction, and in particular to a method, an apparatus, a device, and a storage medium for generating a motion intention recognition model.
Background
With the development of sensing and digitization technology, more and more methods are available for detecting human gait information. Existing methods for recognizing human movement intention fall mainly into two categories: intention recognition based on mechanical information and intention recognition based on bioelectric information. Mechanical information, however, can only be obtained after the user has started to move, so recognition suffers from serious latency, cannot directly reflect the person's intention, and makes flexible control difficult. Bioelectric approaches, in turn, must account for the influence of the user's muscle state after prolonged use: continuous movement reduces muscle contractility and causes sweating of the skin surface, which degrades the accuracy of movement intention prediction.
Disclosure of Invention
An embodiment of the invention provides a method, a device, a computer device, and a storage medium for generating a movement intention recognition model, to solve the problem of low accuracy in movement intention recognition results.
An embodiment of the invention also provides a method, a device, a computer device, and a storage medium for movement intention recognition, likewise addressing the low accuracy of movement intention recognition results.
A method of generating a movement intention recognition model, comprising:
acquiring multiple sets of sample information collected by a wearable device, wherein each set of sample information comprises an inertial measurement unit signal, a plantar pressure signal, and a sample electromyographic signal;
in each set of sample information, obtaining a sample joint moment from the inertial measurement unit signal and the plantar pressure signal; and
inputting the sample electromyographic signals and the corresponding sample joint moments of each set of sample information into a preset neural network model for training, to obtain a movement intention recognition model.
A method of movement intention recognition, comprising:
acquiring an electromyographic signal to be recognized, and inputting it into a movement intention recognition model for recognition to obtain a target joint moment, wherein the movement intention recognition model is obtained with the above generation method; and
obtaining a movement intention recognition result based on the target joint moment.
A movement intention recognition model generation device, comprising:
a sample information acquisition module, configured to acquire multiple sets of sample information collected by a wearable device, wherein each set of sample information comprises an inertial measurement unit signal, a plantar pressure signal, and a sample electromyographic signal;
a sample joint moment obtaining module, configured to obtain, in each set of sample information, a sample joint moment from the inertial measurement unit signal and the plantar pressure signal; and
a training module, configured to input the sample electromyographic signals and the corresponding sample joint moments of each set of sample information into a preset neural network model for training, to obtain a movement intention recognition model.
A movement intention recognition device, comprising:
a first recognition module, configured to acquire an electromyographic signal to be recognized, and to input it into a movement intention recognition model for recognition to obtain a target joint moment, wherein the movement intention recognition model is obtained with the above generation method; and
a movement intention recognition result obtaining module, configured to obtain a movement intention recognition result based on the target joint moment.
A computer device comprises a memory, a processor, and a computer program stored in the memory and executable on the processor; when executing the computer program, the processor implements the above movement intention recognition model generation method or the above movement intention recognition method.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the above movement intention recognition model generation method or the above movement intention recognition method.
With the above method, device, computer equipment, and storage medium for generating a movement intention recognition model, multiple sets of sample information collected by the wearable device are acquired, each set comprising an inertial measurement unit signal, a plantar pressure signal, and a sample electromyographic signal; in each set of sample information, a sample joint moment is obtained from the inertial measurement unit signal and the plantar pressure signal; and the sample electromyographic signals and corresponding sample joint moments of each set are input into a preset neural network model for training to obtain a movement intention recognition model. This scheme accounts for the influence of muscle state on the electromyographic signal after prolonged repetitive movement: the neural network is trained on the sample electromyographic signals against sample joint moments estimated from the inertial measurement unit and plantar pressure signals. It thereby overcomes the latency of conventional movement intention recognition and the sweating and muscle fatigue caused by prolonged muscle activity, solves the problem of low accuracy in movement intention recognition results, and achieves intention recognition with high robustness and high accuracy.
With the above method, device, computer equipment, and storage medium for movement intention recognition, an electromyographic signal to be recognized is acquired and input into a movement intention recognition model for recognition to obtain a target joint moment, where the model is obtained with the generation method above; a movement intention recognition result is then obtained based on the target joint moment. This likewise overcomes the latency of conventional movement intention recognition and the sweating and muscle fatigue caused by prolonged muscle activity, solves the problem of low accuracy in movement intention recognition results, and achieves intention recognition with high robustness and high accuracy.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in describing the embodiments are briefly introduced below. The drawings in the following description show only some embodiments of the invention; a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a schematic diagram of an application environment of a motion intention recognition model generation method or a motion intention recognition method according to an embodiment of the invention;
FIG. 2 is a diagram showing an example of a method for generating a motion intention recognition model according to an embodiment of the present invention;
FIG. 3 is another exemplary diagram of a method for generating a motion intent recognition model in accordance with an embodiment of the present invention;
FIG. 4 is another exemplary diagram of a method for generating a motion intent recognition model in accordance with an embodiment of the present invention;
FIG. 5 is a schematic block diagram of a movement intention recognition model generating apparatus in an embodiment of the present invention;
FIG. 6 is a diagram illustrating an exemplary method for identifying movement intent in accordance with an embodiment of the present invention;
FIG. 7 is a schematic block diagram of a movement intention recognition device in an embodiment of the invention;
FIG. 8 is a schematic diagram of a computer device in accordance with an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art based on these embodiments without inventive effort fall within the protection scope of the invention.
The method for generating a movement intention recognition model provided by the embodiments of the invention can be applied in the application environment shown in Fig. 1. Specifically, the generation system may include a client and a server, as shown in Fig. 1; the client and the server communicate through a network to solve the problem of low accuracy in movement intention recognition results. The client, also called the user side, is the program corresponding to the server that provides local services to the user. The client may be installed on, but is not limited to, personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices. The server may be implemented as a standalone server or as a server cluster composed of multiple servers.
In an embodiment, as shown in Fig. 2, a method for generating a movement intention recognition model is provided. Taking its application to the server in Fig. 1 as an example, the method includes the following steps:
S11: acquire multiple sets of sample information collected by the wearable device, where each set of sample information comprises an inertial measurement unit signal, a plantar pressure signal, and a sample electromyographic signal.
Sample information is the information required to train the movement intention recognition model. Specifically, each set of sample information includes an inertial measurement unit (IMU) signal, a plantar pressure signal, and a sample electromyographic signal. Optionally, the multiple sets of sample information may be acquired from the wearable device in real time and used as training data as they arrive: each time the wearable device collects one set of sample information, that set is fetched and used for training, so that acquisition and model training proceed in parallel. Alternatively, multiple sets of sample information collected in advance by the wearable device may be acquired first and then input into the preset neural network model for training. A wearable device is a portable device that is worn directly on the body or integrated into the user's clothing or accessories.
In a specific application scenario, the wearable device consists of multiple sensors. A medical-alcohol disinfecting pad can be used to wipe each placement position on the muscle; once stains are removed, the sensors of the wearable device are placed, with their electrodes attached to the corresponding muscle positions of the left leg. The muscles involved include the rectus femoris, semitendinosus, vastus lateralis, biceps femoris, gastrocnemius, and tibialis anterior, together with the instep and the sole. Specifically, the sensors on the rectus femoris and tibialis anterior collect the sample electromyographic signals and inertial measurement unit signals; the sensor at the instep collects inertial measurement unit signals; the remaining sensors collect the sample electromyographic signals and plantar pressure signals. The placement principle is to attach the sensors clockwise from top to bottom, which facilitates analysis. After the wearable device is set up, the switch of each sensor is turned on, the relevant parameters are set, the connection state between the electrodes and the lower limb is checked, and the motion waveform of the lower-limb electromyographic signals is observed to ensure that every sensor works normally. Acquisition of the inertial measurement unit signals, plantar pressure signals, and sample electromyographic signals then begins, with the subject performing a number of preset actions continuously.
S12: in each set of sample information, compute a sample joint moment from the inertial measurement unit signal and the plantar pressure signal.
The sample joint moment is the joint moment used for model training. A joint moment is the tendency of a joint to rotate, for example the knee rotating forward or backward. The sample joint moment is computed from the inertial measurement unit signal and the plantar pressure signal. Specifically, in each set of sample information, the joint trajectories of the hip, knee, and ankle during human movement are first computed from the two signals. This computation mainly recovers the joint motion posture information from the inertial measurement unit signal and the plantar pressure signal; the recovered information comprises the joint movement speed, the joint movement acceleration, and the joint position. Substituting these values into an inverse dynamics equation then yields the sample joint moment. Preferably, to improve the accuracy of the generated sample joint moment, the joint movement speed, joint movement acceleration, and joint position are substituted into the inverse dynamics equation together with the plantar pressure signal, the gravitational acceleration, and the joint mass.
When generating the sample joint moment from the inertial measurement unit signal and the plantar pressure signal, the two signals are first numerically converted, that is, each is converted into values that can be computed on directly; the values corresponding to the inertial measurement unit signal and those corresponding to the plantar pressure signal are then processed to obtain the sample joint moment.
S13: input the sample electromyographic signals and the corresponding sample joint moments of each set of sample information into a preset neural network model for training to obtain a movement intention recognition model.
The movement intention recognition model is a model that can recognize electromyographic signals; through it, the joint moment of the electromyographic signal at each instant can be identified. Specifically, after the sample electromyographic signals and sample joint moments are obtained, the sample electromyographic signal in each set of sample information serves as the data to be trained on, and the corresponding sample joint moment serves as its supervision information. Both are input together into the preset neural network model for training: the network estimates a predicted joint moment for the sample electromyographic signal at each instant, the predicted joint moments are compared with the corresponding sample joint moments, and the model is trained iteratively through this comparison, yielding the movement intention recognition model.
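For illustration, the sketch below shows what such supervised training could look like in PyTorch. The patent does not disclose the network architecture, so the small fully connected regressor, its layer sizes, the Adam optimizer, and the mean-squared-error loss are all assumptions made for this example.

    import torch
    from torch import nn

    class IntentionNet(nn.Module):
        """Hypothetical regressor mapping an EMG feature vector to joint moments."""
        def __init__(self, emg_dim, joint_dim):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(emg_dim, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, joint_dim),
            )

        def forward(self, emg):
            return self.net(emg)

    def train(model, emg_samples, sample_joint_moments, epochs=100, lr=1e-3):
        """The sample joint moments act as supervision for the sample EMG inputs."""
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.MSELoss()
        for _ in range(epochs):
            predicted = model(emg_samples)                   # predicted joint moment per sample
            loss = loss_fn(predicted, sample_joint_moments)  # compare with sample joint moment
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        return model

A squared-error loss is a natural fit here because the supervision signal, the sample joint moment, is continuous rather than categorical.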
In this embodiment, multiple sets of sample information collected by the wearable device are acquired, each set comprising an inertial measurement unit signal, a plantar pressure signal, and a sample electromyographic signal; in each set, a sample joint moment is obtained from the inertial measurement unit signal and the plantar pressure signal; and the sample electromyographic signals and corresponding sample joint moments are input into a preset neural network model for training to obtain a movement intention recognition model. The embodiment thus accounts for the influence of muscle state on the electromyographic signal after prolonged repetitive movement: the neural network is trained on the sample electromyographic signals against sample joint moments estimated from the inertial measurement unit and plantar pressure signals. This overcomes the latency of conventional movement intention recognition and the sweating and muscle fatigue caused by prolonged muscle activity, solves the problem of low accuracy in movement intention recognition results, and achieves intention recognition with high robustness and high accuracy.
In an embodiment, as shown in Fig. 3, after the sample electromyographic signals and sample joint moments are input into the preset neural network model for training and the movement intention recognition model is generated, the generation method further specifically includes the following steps:
S14: acquire, in real time, the electromyographic signal to be verified and the corresponding preset standard joint moment, and input the electromyographic signal to be verified into the movement intention recognition model for recognition to obtain the joint moment to be verified.
The electromyographic signal to be verified is the electromyographic signal used for verification. The joint moment to be verified is the joint moment obtained by recognizing that signal with the movement intention recognition model. The preset standard joint moment is a reference joint moment obtained in advance for the same signal by other means; specifically, it can be obtained by collecting the human electromyographic, inertial measurement unit, and plantar pressure signals in real time and performing an inverse dynamics computation on the inertial measurement unit and plantar pressure signals. The electromyographic signals to be verified and the corresponding preset standard joint moments collected by the wearable device during human movement are acquired in real time, and the electromyographic signals to be verified are input into the movement intention recognition model for recognition, yielding the joint moments to be verified.
S15: compute the loss function between the joint moment to be verified and the corresponding preset standard joint moment.
S16: if the loss function does not meet the preset value, iteratively train the movement intention recognition model based on the loss function, so that the loss between the joint moment to be verified and the corresponding preset standard joint moment keeps decreasing until it is smaller than the preset value.
The loss function indicates the difference between the joint moment to be verified and the corresponding preset standard joint moment. The preset value is a threshold set in advance to judge whether the movement intention recognition model has reached equilibrium; in this embodiment the preset value is preferably 0.
Specifically, after the joint moment to be verified is obtained, it is compared with the corresponding preset standard joint moment by taking their difference, and the loss function between the two is computed. If the loss function does not meet the preset value, the movement intention recognition model is adjusted in real time during training by optimizing the loss function; in this embodiment, this real-time adjustment is realized by iteratively training the model based on the loss function. Concretely, when the computed loss between the joint moment to be verified and the corresponding preset standard joint moment does not meet the preset value, a new joint moment to be verified and corresponding preset standard joint moment are acquired to train the model, their loss is computed, and the cycle repeats, so that the loss decreases step by step until it is smaller than the preset value. By continuously correcting the model in this way, the difference between the joint moment to be verified and the corresponding preset standard joint moment keeps shrinking until the loss falls below the preset value and the model reaches equilibrium, thereby correcting the movement intention recognition model at the current instant through online learning. Note that in this embodiment the iterative training is driven by continuously feeding in new sample data (joint moments to be verified and the corresponding preset standard joint moments); suitable online learning algorithms include, but are not limited to, online gradient descent.
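A minimal sketch of one such online update step is given below, using the online gradient descent variant mentioned above. The squared-error loss, the learning rate, and the threshold value are illustrative assumptions; the patent only requires that training continue until the loss falls below the preset value.

    import torch
    from torch import nn

    def online_update(model, emg_to_verify, standard_joint_moment, lr=1e-4, preset_value=1e-3):
        """One online-learning step on a newly arrived verification pair.

        Returns the loss so the caller can keep iterating until it falls
        below the preset value.
        """
        moment_to_verify = model(emg_to_verify)  # recognize with the current model
        loss = nn.functional.mse_loss(moment_to_verify, standard_joint_moment)
        if loss.item() >= preset_value:          # loss does not meet the preset value
            loss.backward()                      # one online gradient descent step
            with torch.no_grad():
                for p in model.parameters():
                    p -= lr * p.grad
            model.zero_grad()
        return loss.item()

In practice the caller would invoke this whenever a new verification pair arrives, stopping once the returned loss stays below the preset value.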
In this embodiment, the electromyographic signal to be verified and the corresponding preset standard joint moment are acquired in real time; the electromyographic signal to be verified is input into the movement intention recognition model for recognition to obtain the joint moment to be verified; the loss function between the joint moment to be verified and the corresponding preset standard joint moment is computed; and if the loss function does not meet the preset value, the model is iteratively trained based on the loss function, so that the difference between the joint moment to be verified and the corresponding preset standard joint moment keeps decreasing until it is smaller than the preset value. Through this online learning, the recognition accuracy of the movement intention recognition model continuously improves, and the trained model better matches the characteristics of human muscles during prolonged movement.
In an embodiment, as shown in Fig. 4, computing the sample joint moment from the inertial measurement unit signal and the plantar pressure signal specifically includes the following steps:
S121: convert the inertial measurement unit signal and the plantar pressure signal to obtain joint motion data, where the joint motion data includes the joint movement speed, the joint movement acceleration, and the joint position.
Specifically, to obtain accurate joint motion data for each joint of the human body, the human posture is first initialized and calibrated, and the motion posture is then tracked in real time. In an embodiment, because the relative positions of the sensors can be considered fixed once they are bound to the limbs, the initialization calibration amounts to determining the posture rotation matrix. The posture rotation matrix corresponds to the transformation between the inertial coordinate system and the human joint coordinate system, and real-time tracking corresponds to updating the transformation between the joint motion coordinate systems in real time. The subject only needs to stand still for about 10 seconds to complete the calibration. During calibration, the corresponding inertial measurement unit signals and plantar pressure signals are obtained from the initial motion posture of the sensors after they are fixed to the body; these signals are then converted to obtain the joint motion data, which includes the joint movement speed, joint movement acceleration, and joint position.
In this embodiment, the waist sensor serves as the reference root node. After the inertial measurement unit signals and plantar pressure signals are acquired, the posture value of each joint is computed by traversing the joints; the posture values are then transformed into the joint coordinate system of the human body model, completing the tracking of the motion posture. Once the motion posture, i.e., the joint angles, is obtained, the joint motion data are derived by first-order and second-order differences: the joint movement speed (joint angular velocity), the joint movement acceleration (joint angular acceleration), and the joint position.
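For illustration, a minimal numpy sketch of this differencing step follows; the sampling period dt and the array layout (one row per time step, one column per joint) are assumptions, since the patent does not specify them.

    import numpy as np

    def joint_motion_data(joint_angles, dt):
        """Derive joint movement speed and acceleration from tracked joint angles.

        joint_angles: array of shape (T, J), one joint angle per time step and joint,
                      recovered from the IMU and plantar pressure signals.
        dt:           sampling period in seconds (assumed).
        """
        velocity = np.gradient(joint_angles, dt, axis=0)  # first-order difference
        acceleration = np.gradient(velocity, dt, axis=0)  # second-order difference
        return velocity, acceleration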
S122: substituting the joint movement speed, the joint movement acceleration and the joint position into an inverse dynamics equation for calculation to obtain a sample joint moment.
Specifically, after the joint movement speed, the joint movement acceleration and the joint position are obtained, the joint movement speed, the joint movement acceleration and the joint position are substituted into an inverse kinetic equation, and each joint moment is solved, so that a sample joint moment is obtained.
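As a sketch of this step, the function below evaluates a hypothetical single-segment, sagittal-plane inverse dynamics equation, tau = I*theta_dd + m*g*l*sin(theta) + F*r. The patent does not commit to a particular equation form, so the inertia, segment mass, center-of-mass distance, and ground-reaction moment arm are illustrative assumptions.

    import numpy as np

    def inverse_dynamics_moment(theta, theta_dd, grf, inertia, mass, com_len, grf_arm, g=9.81):
        """Solve a simplified single-joint inverse dynamics equation.

        theta:    joint position (rad); theta_dd: joint movement acceleration (rad/s^2);
        grf:      ground reaction force from the plantar pressure signal (N).
        inertia, mass, com_len, and grf_arm describe the assumed one-segment model.
        """
        inertial_term = inertia * theta_dd                 # resists angular acceleration
        gravity_term = mass * g * com_len * np.sin(theta)  # gravity acting at the segment COM
        contact_term = grf * grf_arm                       # contribution of ground contact
        return inertial_term + gravity_term + contact_term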
In this embodiment, the inertial measurement unit signal and the plantar pressure signal are converted to obtain joint motion data, including the joint movement speed, joint movement acceleration, and joint position; these are substituted into an inverse dynamics equation to obtain the sample joint moment, which improves the accuracy of the generated sample joint moment.
In an embodiment, before the sample electromyographic signals and the sample joint moments are input into the preset neural network model for training, the generation method further comprises:
preprocessing the sample electromyographic signals with a preset strategy.
Specifically, to further improve the accuracy of subsequent model training, in this embodiment the sample electromyographic signals are preprocessed with a preset strategy before being input into the preset neural network model. The preset strategy is preferably root mean square (RMS) feature extraction, a standard method of extracting features from a signal: the sample electromyographic signal is squared, averaged, and then square-rooted to obtain its root mean square value, and the RMS values are input into the preset neural network model for training, further improving training accuracy.
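Concretely, the RMS feature of an analysis window is the square root of the mean of the squared samples. The sliding-window length and hop size in the sketch below are illustrative assumptions; the patent does not specify a windowing scheme.

    import numpy as np

    def rms_features(emg, window=200, hop=50):
        """Square, average, then square-root over sliding windows of the raw EMG."""
        starts = range(0, len(emg) - window + 1, hop)
        return np.array([np.sqrt(np.mean(emg[s:s + window] ** 2)) for s in starts])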
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
In an embodiment, a movement intention recognition model generation device is provided, corresponding one-to-one to the generation method in the above embodiment. As shown in Fig. 5, the device includes a sample information acquisition module 11, a sample joint moment obtaining module 12, and a training module 13. The modules are described in detail as follows:
a sample information acquisition module 11, configured to acquire multiple sets of sample information collected by the wearable device, where each set of sample information comprises an inertial measurement unit signal, a plantar pressure signal, and a sample electromyographic signal;
A sample joint moment obtaining module 12, configured to obtain a sample joint moment according to the inertia measurement unit signal and the plantar pressure signal in each set of sample information;
The training module 13 is configured to input the sample electromyographic signals and the corresponding sample joint moments in each set of sample information into a preset neural network model for training, so as to obtain a movement intention recognition model.
Preferably, the movement intention recognition model generating device further includes:
a second recognition module, configured to acquire, in real time, the electromyographic signal to be verified and the corresponding preset standard joint moment, and to input the electromyographic signal to be verified into the movement intention recognition model for recognition to obtain the joint moment to be verified;
a calculation module, configured to compute the loss function between the joint moment to be verified and the corresponding preset standard joint moment; and
an iterative training module, configured to iteratively train the movement intention recognition model based on the loss function when the loss function does not meet the preset value, so that the loss between the joint moment to be verified and the corresponding preset standard joint moment keeps decreasing until it is smaller than the preset value.
Preferably, the sample joint moment obtaining module 12 comprises:
a conversion unit, configured to convert the inertial measurement unit signal and the plantar pressure signal to obtain joint motion data, where the joint motion data includes the joint movement speed, the joint movement acceleration, and the joint position; and
a calculation unit, configured to substitute the joint movement speed, the joint movement acceleration, and the joint position into an inverse dynamics equation to obtain the sample joint moment.
Preferably, the movement intention recognition model generating device further includes:
a preprocessing module, configured to preprocess the sample electromyographic signals with a preset strategy.
For the specific definition of the movement intention recognition model generation device, reference may be made to the definition of the generation method above; details are not repeated here. Each module in the above device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
An embodiment of the invention also provides a movement intention recognition method, which can be applied in the application environment shown in Fig. 1. Specifically, the method is applied in a movement intention recognition system that includes a client and a server, as shown in Fig. 1; the client and the server communicate through a network to solve the problem of low accuracy in movement intention recognition results. The client, also called the user side, is the program corresponding to the server that provides local services to the user. The client may be installed on, but is not limited to, personal computers, notebook computers, smartphones, tablet computers, and portable wearable devices. The server may be implemented as a standalone server or as a server cluster composed of multiple servers.
In an embodiment, as shown in Fig. 6, a movement intention recognition method is provided. Taking its application to the server in Fig. 1 as an example, the method includes the following steps:
S21: acquire an electromyographic signal to be recognized, and input it into the movement intention recognition model for recognition to obtain a target joint moment, where the movement intention recognition model is obtained with the generation method described above.
The electromyographic signal to be recognized is the signal on which intention recognition is to be performed. It may be collected by the wearable device in real time, or acquired from the wearable device in advance. The target joint moment is the joint moment obtained by recognizing the electromyographic signal with the movement intention recognition model. A joint moment is the tendency of a joint to rotate, for example the knee rotating forward or backward.
Specifically, after the electromyographic signal to be recognized is obtained, it is input into the movement intention recognition model for recognition to obtain the target joint moment, where the movement intention recognition model is obtained with the generation method described above.
S22: obtain a movement intention recognition result based on the target joint moment.
Specifically, from the target joint moment output by the movement intention recognition model, the motion angle of each joint at each instant can be obtained, such as the degree of hip flexion or knee extension. Combined with the preset joint types, the joint trajectory of the human movement is established, the movement intention recognition result is obtained, and movement intention recognition is thus realized. In a specific embodiment, taking walking as an example, the variation trend of the surface electromyographic signals and the changes in joint angles (reflecting the flexion and extension of each joint) can be decomposed into two phases, leg swing and support. The leg-swing phase is the process in which the leg leaves the ground and swings forward, accompanied by the hip and knee joints flexing first and then extending; the support phase is the process in which the leg contacts the ground, balances and supports the body, and provides walking power, operating primarily through hip extension. Once the joint trajectory output by the movement intention recognition model is similar to these human walking characteristics, the movement intention can be judged to be walking.
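As an illustration of this trajectory comparison, one simple realization (an assumption, since the patent does not prescribe a similarity measure) is to correlate the recognized joint-angle trajectory against a stored walking template and accept the label when the similarity is high enough:

    import numpy as np

    def matches_walking(joint_trajectory, walking_template, threshold=0.9):
        """Judge whether a recognized joint trajectory resembles stored walking characteristics.

        Both inputs are 1-D joint-angle sequences of equal length; the normalized
        correlation and its 0.9 threshold are illustrative assumptions.
        """
        a = (joint_trajectory - joint_trajectory.mean()) / joint_trajectory.std()
        b = (walking_template - walking_template.mean()) / walking_template.std()
        similarity = float(np.mean(a * b))  # Pearson correlation of equal-length series
        return similarity >= threshold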
In this embodiment, an electromyographic signal to be recognized is acquired and input into the movement intention recognition model for recognition to obtain a target joint moment, where the movement intention recognition model is obtained with the generation method described above; a movement intention recognition result is then obtained based on the target joint moment. This overcomes the drawback of conventional electromyography-based movement intention recognition, whose accuracy degrades under different fatigue states and degrees of fatigue, solves the problem of low accuracy in movement intention recognition results, and achieves intention recognition with high robustness and high accuracy.
In an embodiment, the target joint moment carries an action label, and obtaining the movement intention recognition result based on the target joint moment includes: determining the action category of the electromyographic signal to be recognized at the corresponding instant according to the variation trend of the target joint moment and the action label, and generating the movement intention recognition result.
An action label is an action category defined in advance according to the actual situation, such as the knee moving forward for knee flexion. The variation trend of the target joint moment is the trend of the joint's rotation, for example from the knee rotating forward to the knee rotating backward. In particular, the same action produces the same variation trend in the moment of the same joint. Therefore, in this step, the action category corresponding to the electromyographic signal to be recognized in a time period can be judged from the variation trend of the target joint moment and the action label.
In this embodiment, determining the action category of the electromyographic signal to be recognized at the corresponding instant from the variation trend of the target joint moment and the action label mainly comprises: labeling the electromyographic signal at each instant according to the action label of the target joint moment at that instant, to obtain the joint moment label corresponding to the signal; determining the joint trajectory of the electromyographic signal to be recognized from the variation trend of the target joint moment and the corresponding joint moment labels; and finally comparing this joint trajectory with preset human walking characteristics to obtain the action category of the signal at the corresponding instant. For example, if the joint trajectory of the electromyographic signal to be recognized is similar to the human walking characteristics, its action category can be judged to be walking. The advantage of labeling electromyographic signals with joint moments is as follows. Because electromyographic signals are complex, the traditional labeling approach extracts signal features and builds a human movement intention recognition model through a series of feature-extraction operations; however, at the model testing stage it cannot fully account for the influence of muscle state on the electromyographic signal after prolonged repetitive actions. At that point, because muscle excitability and contractility have decreased, the collected electromyographic data no longer fit the trained model, and the accuracy of action prediction drops. In this embodiment, the joint loading condition is obtained by computing the joint moments, which determines the start and stop instants of the corresponding motion signal; this allows accurate labeling of the signal and avoids the influence of individual differences.
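The sketch below illustrates this idea: joint moments locate the start and stop of a motion, and the time-aligned electromyographic samples are labeled accordingly. The magnitude threshold and the label values are illustrative assumptions.

    import numpy as np

    def label_emg_by_moment(emg, joint_moment, action_label, rest_label=0, threshold=5.0):
        """Label time-aligned EMG samples by whether the joint moment is active.

        Samples where the moment magnitude exceeds the (assumed) threshold in N*m
        are tagged with action_label; the rest are tagged rest_label.
        """
        active = np.abs(joint_moment) > threshold  # start/stop from joint loading
        return np.where(active, action_label, rest_label)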
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and does not constitute any limitation on the implementation of the embodiments of the present invention.
In one embodiment, there is provided a movement intention recognition device in one-to-one correspondence with the movement intention recognition method in the above embodiment. As shown in fig. 7, the movement intention recognition device includes a first recognition module 21 and a movement intention recognition result obtaining module 22. The functional modules are described in detail as follows:
The first recognition module 21 is configured to obtain an electromyographic signal to be recognized, input the electromyographic signal to be recognized into a motion intention recognition model, and perform recognition to obtain a target joint moment, where the motion intention recognition model is obtained by using the motion intention recognition model generation method described above;
And the movement intention recognition result obtaining module 22 is configured to obtain a movement intention recognition result based on the target joint moment.
Preferably, the movement intention recognition result obtaining module 22 includes:
an action category determination unit, configured to determine the action category of the electromyographic signal to be recognized at the corresponding instant according to the variation trend of the target joint moment and the action label, and to generate the movement intention recognition result.
For the specific definition of the movement intention recognition device, reference may be made to the definition of the movement intention recognition method above; details are not repeated here. Each module in the above movement intention recognition device may be implemented in whole or in part by software, hardware, or a combination thereof. The modules may be embedded in or independent of a processor of a computer device in hardware form, or stored in a memory of the computer device in software form, so that the processor can invoke and execute the operations corresponding to each module.
In one embodiment, a computer device is provided. The computer device may be a server, and its internal structure may be as shown in Fig. 8. The computer device includes a processor, a memory, a network interface, and a database connected by a system bus. The processor provides computing and control capabilities. The memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system, a computer program, and a database. The internal memory provides an environment for running the operating system and the computer program in the non-volatile storage medium. The database stores the data used by the movement intention recognition model generation method and the movement intention recognition method in the above embodiments. The network interface communicates with external terminals through a network connection. The computer program is executed by the processor to implement the movement intention recognition model generation method or the movement intention recognition method.
In one embodiment, a computer device is provided that includes a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the movement intention recognition model generation method in the above embodiment when executing the computer program, or implements the movement intention recognition method in the above embodiment when executing the computer program.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which when executed by a processor implements the movement intention recognition model generation method in the above embodiment, or which when executed by a processor implements the movement intention recognition method in the above embodiment.
Those skilled in the art will appreciate that all or part of the above methods may be implemented by a computer program stored on a non-volatile computer-readable storage medium; when executed, the program may include the procedures of the method embodiments above. Any reference to memory, storage, a database, or another medium used in the embodiments provided herein may include non-volatile and/or volatile memory. Non-volatile memory can include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory can include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the division into the above functional units and modules is used as an example. In practical applications, the above functions may be allocated to different functional units and modules as needed; that is, the internal structure of the apparatus may be divided into different functional units or modules to complete all or part of the functions described above.
The above embodiments are intended only to illustrate the technical solutions of the present invention, not to limit them. Although the invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents; such modifications and substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be included within the protection scope of the present invention.

Claims (10)

1. A method for generating a motion intention recognition model, comprising:
acquiring multiple sets of sample information collected by a wearable device, wherein each set of sample information comprises an inertial measurement unit signal, a plantar pressure signal, and a sample electromyographic signal; the wearable device consists of multiple sensors, wherein the sensor attached to the rectus femoris collects the sample electromyographic signal and the inertial measurement unit signal, the sensor attached to the tibialis anterior collects the sample electromyographic signal and the inertial measurement unit signal, and the sensor attached to the instep collects the inertial measurement unit signal;
in each set of sample information, obtaining a sample joint moment from the inertial measurement unit signal and the plantar pressure signal; and
inputting the sample electromyographic signals and the corresponding sample joint moments of each set of sample information into a preset neural network model for training to obtain a movement intention recognition model, wherein the corresponding sample joint moments serve as the supervision information of the sample electromyographic signals.
2. The movement intention recognition model generation method according to claim 1, wherein after the sample electromyographic signals and the sample joint moments are input into the preset neural network model for training to generate the movement intention recognition model, the method further comprises:
acquiring, in real time, an electromyographic signal to be verified and a corresponding preset standard joint moment;
inputting the electromyographic signal to be verified into the movement intention recognition model for recognition to obtain a joint moment to be verified;
computing the loss function between the joint moment to be verified and the corresponding preset standard joint moment; and
if the loss function does not meet a preset value, iteratively training the movement intention recognition model based on the loss function, so that the loss between the joint moment to be verified and the corresponding preset standard joint moment keeps decreasing until it is smaller than the preset value.
3. The method for generating a motion intention recognition model according to claim 1, wherein the obtaining a sample joint moment from the inertia measurement unit signal and the plantar pressure signal includes:
converting the inertial measurement unit signal and the plantar pressure signal to obtain joint movement data, wherein the joint movement data comprise a joint movement speed, a joint movement acceleration and a joint position; and
substituting the joint movement speed, the joint movement acceleration and the joint position into an inverse dynamics equation to calculate the sample joint moment (an illustrative inverse dynamics sketch is given after the claims).
4. The movement intention recognition model generation method according to claim 1, further comprising, before the sample electromyographic signals and the sample joint moments are input into the preset neural network model for training:
preprocessing the sample electromyographic signals according to a preset strategy (an illustrative preprocessing sketch is given after the claims).
5. A movement intention recognition method, comprising:
acquiring an electromyographic signal to be recognized, and inputting the electromyographic signal to be recognized into a movement intention recognition model for recognition to obtain a target joint moment, wherein the movement intention recognition model is obtained by the movement intention recognition model generation method according to any one of claims 1 to 4; and
obtaining a movement intention recognition result based on the target joint moment.
6. The movement intention recognition method according to claim 5, wherein the target joint moment carries an action label, and obtaining the movement intention recognition result based on the target joint moment comprises:
determining, from the change trend of the target joint moment and the action label, the action category of the electromyographic signal to be recognized at the corresponding moment, and generating the movement intention recognition result (an illustrative trend-mapping sketch is given after the claims).
7. A movement intention recognition model generation device, comprising:
a sample information acquisition module, configured to acquire a plurality of groups of sample information collected by a wearable device, wherein each group of sample information comprises an inertial measurement unit signal, a plantar pressure signal and a sample electromyographic signal; the wearable device is composed of a plurality of sensors, of which the sensor attached to the rectus femoris and the sensor attached to the tibialis anterior each acquire the sample electromyographic signal and the inertial measurement unit signal, and the sensor attached to the instep acquires the inertial measurement unit signal;
a sample joint moment obtaining module, configured to obtain, in each group of sample information, a sample joint moment from the inertial measurement unit signal and the plantar pressure signal; and
a training module, configured to input the sample electromyographic signal and the corresponding sample joint moment of each group of sample information into a preset neural network model for training to obtain a movement intention recognition model, the corresponding sample joint moment serving as supervision information for the sample electromyographic signal.
8. A movement intention recognition device, comprising:
a first recognition module, configured to acquire an electromyographic signal to be recognized and to input the electromyographic signal to be recognized into a movement intention recognition model for recognition to obtain a target joint moment, wherein the movement intention recognition model is obtained by the movement intention recognition model generation method according to any one of claims 1 to 4; and
a movement intention recognition result obtaining module, configured to obtain a movement intention recognition result based on the target joint moment.
9. A computer device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the movement intention recognition model generation method according to any one of claims 1 to 4 or the movement intention recognition method according to claim 5 or 6.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the movement intention recognition model generation method according to any one of claims 1 to 4 or the movement intention recognition method according to claim 5 or 6.
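
Illustrative sketches of the computational steps recited in the claims follow. The first corresponds to claims 1 and 3: the inertial measurement unit signal is converted into joint kinematics and combined with the plantar pressure signal in an inverse dynamics equation to yield the sample joint moment. It assumes a single planar joint; the segment mass, moment of inertia, centre-of-mass distance and centre-of-pressure moment arm are illustrative values, not parameters taken from the patent.

import numpy as np

def sample_joint_moment(q, qd, qdd, grf, r_cop,
                        m=1.0, inertia=0.01, l_c=0.05, g=9.81):
    # q, qd, qdd: joint position [rad], speed [rad/s] and acceleration
    # [rad/s^2], converted from the inertial measurement unit signal (claim 3).
    # For this single-link simplification the speed-dependent (Coriolis)
    # terms vanish, so qd does not enter the moment balance.
    # grf: vertical ground reaction force [N] from the plantar pressure signal.
    # r_cop: assumed moment arm of the centre of pressure about the joint [m].
    inertial_term = inertia * qdd            # angular acceleration contribution
    gravity_term = m * g * l_c * np.cos(q)   # segment weight about the joint
    external_term = grf * r_cop              # plantar pressure contribution
    return inertial_term + gravity_term - external_term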
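
The preprocessing sketch below corresponds to claim 4. The claim leaves the preset strategy open, so a common surface-electromyography pipeline is assumed here: band-pass filtering, full-wave rectification, linear-envelope extraction and amplitude normalisation. The sampling rate and cut-off frequencies are assumptions.

import numpy as np
from scipy.signal import butter, filtfilt

def preprocess_emg(emg, fs=1000.0):
    # Band-pass 20-450 Hz keeps the usable surface-EMG band (assumed cut-offs).
    b_bp, a_bp = butter(4, [20.0 / (fs / 2.0), 450.0 / (fs / 2.0)], btype="band")
    filtered = filtfilt(b_bp, a_bp, emg)
    rectified = np.abs(filtered)                 # full-wave rectification
    # A 6 Hz low-pass over the rectified signal gives the linear envelope.
    b_lp, a_lp = butter(2, 6.0 / (fs / 2.0), btype="low")
    envelope = filtfilt(b_lp, a_lp, rectified)
    return envelope / (np.max(np.abs(envelope)) + 1e-8)  # amplitude normalisation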
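
The training sketch below corresponds to claim 1. The claims do not fix the preset neural network model, so a small fully connected regressor trained with a mean-squared-error loss in PyTorch is assumed; the layer sizes, learning rate and epoch count are illustrative.

import torch
from torch import nn

class IntentionRegressor(nn.Module):
    # Assumed architecture: preprocessed EMG feature vector -> joint moment.
    def __init__(self, n_features=2, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):
        return self.net(x)

def train_model(emg, moments, epochs=200, lr=1e-3):
    # emg: (N, n_features) preprocessed sample EMG features; moments: (N, 1)
    # sample joint moments from inverse dynamics, used as supervision (claim 1).
    model = IntentionRegressor(n_features=emg.shape[1])
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        optimiser.zero_grad()
        loss = loss_fn(model(emg), moments)
        loss.backward()
        optimiser.step()
    return model

For instance, train_model(torch.randn(100, 2), torch.randn(100, 1)) fits the regressor to dummy tensors of the claimed shape.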
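
The verification sketch below corresponds to claim 2: the loss between the joint moment to be verified and the preset standard joint moment is computed, and the model is retrained while the loss exceeds the preset value. The threshold, retraining budget and learning rate are assumptions, and the model is the hypothetical regressor from the previous sketch.

import torch
from torch import nn

def verify_and_refine(model, emg_val, standard_moments,
                      preset_value=0.05, max_rounds=10,
                      epochs_per_round=50, lr=1e-4):
    loss_fn = nn.MSELoss()
    optimiser = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(max_rounds):
        with torch.no_grad():
            loss = loss_fn(model(emg_val), standard_moments)
        if loss.item() < preset_value:      # loss meets the preset value: stop
            break
        for _ in range(epochs_per_round):   # iterative training based on the loss
            optimiser.zero_grad()
            train_loss = loss_fn(model(emg_val), standard_moments)
            train_loss.backward()
            optimiser.step()
    return model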
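
The trend-mapping sketch below corresponds to claims 5 and 6: a movement intention recognition result is derived from the change trend of the predicted target joint moment. The action categories and the trend tolerance are illustrative assumptions standing in for the action labels carried by the target joint moment.

import numpy as np

def intention_from_moments(moments, tol=0.02,
                           labels=("moment increasing", "moment decreasing", "hold")):
    # moments: 1-D sequence of target joint moments predicted over time.
    # Consecutive differences give the change trend; each step is mapped to
    # one of the assumed action categories.
    result = []
    for delta in np.diff(np.asarray(moments, dtype=float)):
        if delta > tol:
            result.append(labels[0])
        elif delta < -tol:
            result.append(labels[1])
        else:
            result.append(labels[2])
    return result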
CN202010229806.3A 2020-03-27 2020-03-27 Motion intention recognition model generation method, device, equipment and storage medium Active CN113515967B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010229806.3A CN113515967B (en) 2020-03-27 2020-03-27 Motion intention recognition model generation method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010229806.3A CN113515967B (en) 2020-03-27 2020-03-27 Motion intention recognition model generation method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113515967A CN113515967A (en) 2021-10-19
CN113515967B (en) 2024-05-14

Family

ID=78060097

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010229806.3A Active CN113515967B (en) 2020-03-27 2020-03-27 Motion intention recognition model generation method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113515967B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114028775B (en) * 2021-12-08 2022-10-14 福州大学 Ankle joint movement intention identification method and system based on sole pressure
CN116766207B (en) * 2023-08-02 2024-05-28 中国科学院苏州生物医学工程技术研究所 Robot control method based on multi-mode signal motion intention recognition

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103431976A (en) * 2013-07-19 2013-12-11 燕山大学 Lower limb rehabilitation robot system based on myoelectric signal feedback, and control method thereof
JP2016187626A (en) * 2016-07-27 2016-11-04 ソニー株式会社 Multilink system, control method, and computer program
CN107397649A (en) * 2017-08-10 2017-11-28 燕山大学 A kind of upper limbs exoskeleton rehabilitation robot control method based on radial base neural net
CN108785997A (en) * 2018-05-30 2018-11-13 燕山大学 A kind of lower limb rehabilitation robot Shared control method based on change admittance
CN109259739A (en) * 2018-11-16 2019-01-25 西安交通大学 A kind of myoelectricity estimation method of wrist joint motoring torque
CN109940584A (en) * 2019-03-25 2019-06-28 杭州程天科技发展有限公司 The detection method that a kind of exoskeleton robot and its detection human motion are intended to
CN110303471A (en) * 2018-03-27 2019-10-08 清华大学 Assistance exoskeleton control system and control method
CN110801226A (en) * 2019-11-01 2020-02-18 西安交通大学 Human knee joint moment testing system method based on surface electromyographic signals and application

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4549758B2 (en) * 2004-06-30 2010-09-22 本田技研工業株式会社 Exercise measurement method, exercise measurement device, and exercise measurement program
JP4178187B2 (en) * 2005-01-26 2008-11-12 国立大学法人 筑波大学 Wearable motion assist device and control program
KR101680740B1 (en) * 2015-08-31 2016-11-30 한국과학기술연구원 Recognition method of human walking speed intention from surface electromyogram signals of plantar flexor and walking speed control method of a lower-limb exoskeleton robot

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103431976A (en) * 2013-07-19 2013-12-11 燕山大学 Lower limb rehabilitation robot system based on myoelectric signal feedback, and control method thereof
JP2016187626A (en) * 2016-07-27 2016-11-04 ソニー株式会社 Multilink system, control method, and computer program
CN107397649A (en) * 2017-08-10 2017-11-28 燕山大学 A kind of upper limbs exoskeleton rehabilitation robot control method based on radial base neural net
CN110303471A (en) * 2018-03-27 2019-10-08 清华大学 Assistance exoskeleton control system and control method
CN108785997A (en) * 2018-05-30 2018-11-13 燕山大学 A kind of lower limb rehabilitation robot Shared control method based on change admittance
CN109259739A (en) * 2018-11-16 2019-01-25 西安交通大学 A kind of myoelectricity estimation method of wrist joint motoring torque
CN109940584A (en) * 2019-03-25 2019-06-28 杭州程天科技发展有限公司 The detection method that a kind of exoskeleton robot and its detection human motion are intended to
CN110801226A (en) * 2019-11-01 2020-02-18 西安交通大学 Human knee joint moment testing system method based on surface electromyographic signals and application

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on a quadriceps femoris contraction force estimation method based on mechanomyography signals; Wang Daqing; Guo Weibin; Wu Haifeng; Gao Lifu; Chinese Journal of Sensors and Actuators; 2018-11-15 (No. 11); full text *

Also Published As

Publication number Publication date
CN113515967A (en) 2021-10-19

Similar Documents

Publication Publication Date Title
Salarian et al. A novel approach to reducing number of sensing units for wearable gait analysis systems
Yang et al. Classification of multiple finger motions during dynamic upper limb movements
CN113515967B (en) Motion intention recognition model generation method, device, equipment and storage medium
Zhang et al. Lower-limb joint torque prediction using LSTM neural networks and transfer learning
Chen et al. A computational framework for quantitative evaluation of movement during rehabilitation
Wang et al. sEMG-based consecutive estimation of human lower limb movement by using multi-branch neural network
Lee et al. Abnormal gait recognition using 3D joint information of multiple Kinects system and RNN-LSTM
Mendoza-Crespo et al. An adaptable human-like gait pattern generator derived from a lower limb exoskeleton
Siu et al. A neural network estimation of ankle torques from electromyography and accelerometry
Liu et al. EMG-driven model-based knee torque estimation on a variable impedance actuator orthosis
CN112949676A (en) Self-adaptive motion mode identification method of flexible lower limb assistance exoskeleton robot
Mallikarjuna et al. Feedback-based gait identification using deep neural network classification
JP2019084130A (en) Walking motion evaluation apparatus, walking motion evaluation method, and program
CN110400618A (en) A kind of three-dimensional gait generation method based on human motion structure feature
Gopalakrishnan et al. A novel computational framework for deducing muscle synergies from experimental joint moments
López-Delis et al. Continuous estimation prediction of knee joint angles using fusion of electromyographic and inertial sensors for active transfemoral leg prostheses
Zhao et al. Multimodal sensing in stroke motor rehabilitation
CN112115964A (en) Acceleration labeling model generation method, acceleration labeling method, device and medium
Jawed et al. Rehabilitation posture correction using neural network
Nutakki et al. Classifying gait features for stance and swing using machine learning
Sivakumar et al. ANN for gait estimations: a review on current trends and future applications
Delgado et al. Estimation of joint angle from sEMG and inertial measurements based on deep learning approach
Hsieh et al. A wearable walking monitoring system for gait analysis
KR20160023981A (en) A sEMG Signal based Gait Phase Recognition Method selecting Features and Channels Adaptively
CN112115813A (en) Human body electromyographic signal labeling method and device and computing equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant