CN112807001B - Multi-step intention recognition and motion prediction method, system, terminal and medium - Google Patents

Multi-step intention recognition and motion prediction method, system, terminal and medium Download PDF

Info

Publication number
CN112807001B
CN112807001B (application CN201911120066.3A)
Authority
CN
China
Prior art keywords
electromyographic
value
signals
state
joint angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201911120066.3A
Other languages
Chinese (zh)
Other versions
CN112807001A (en)
Inventor
段有康
陈小刚
桂剑
马斌
赵婷
崔毅
沈芸
李顺芬
宋志棠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhongyan Jiuyi Technology Co ltd
Original Assignee
Shanghai Zhongyan Jiuyi Technology Co ltd
Filing date
Publication date
Application filed by Shanghai Zhongyan Jiuyi Technology Co ltd
Priority to CN201911120066.3A
Publication of CN112807001A
Application granted
Publication of CN112807001B
Legal status: Active
Anticipated expiration

Links

Abstract

The application provides a multi-step intention recognition and motion prediction method, system, terminal and medium. The method comprises the following steps: collecting electromyographic signals and joint angle values of a wearer's legs under different gaits, and preprocessing the electromyographic signals and joint angle values; extracting gait feature vectors from the preprocessed electromyographic signals and joint angle values; constructing a motion intention recognition model from the gait feature vectors for predicting motion intention; and/or constructing a joint angle recognition model from the gait feature vectors for predicting joint angles. The application addresses the shortcoming that prior-art human-computer interaction systems relying on physical sensors cannot recognize and predict continuously in real time because of their inherent lag, and the problem that, although electromyographic signals precede human movement, electromyography-based recognition and prediction of lower-limb motion lags far behind that of upper-limb motion patterns. By accurately classifying and predicting several typical human gaits, the application enables the walking mode to be identified promptly and the motion trajectory to be recognized and predicted continuously in real time.

Description

Multi-step intention recognition and motion prediction method, system, terminal and medium
Technical Field
The application relates to the field of robots, and in particular to a multi-step intention recognition and motion prediction method, system, terminal and medium.
Background
The lower limb exoskeleton assistive robot is a human augmentation device that can identify the motion state of the wearer's lower limbs, provide assistance and enhance human capability. The exoskeleton system is a human-in-the-loop, human-machine coupled device that must sense and assist human movement, so recognizing and predicting that movement is unavoidable. Among the robot's sensing modalities, force sensors, position sensors and the like form a complex sensing system that can feel somewhat unnatural to the user; moreover, since the actual movement of the human body lags the corresponding nerve signal by about 100 ms, there is a natural delay between nerve signal and action. Skeletal muscle is the power source that drives the limbs, muscle activity corresponds to limb activity, and surface electromyographic signals, which reflect the muscle activity state, have become an important means of human-machine interaction thanks to their convenient, non-invasive surface acquisition.
The hip, knee and ankle joints of the lower limb are the most important joints for human movement and contribute greatly to the body's flexibility and balance. At present, exoskeleton rehabilitation or assistive robots for the lower limbs mostly use plantar pressure sensors, human-robot interaction force sensors and other sensors arranged at the joints to acquire human motion information. For example, the HAL exoskeleton assistive robot of the University of Tsukuba in Japan uses plantar pressure sensors to obtain changes in plantar pressure during walking and thereby judge the walking phase. However, human-machine interaction systems that rely on physical sensors cannot recognize and predict continuously in real time because of their inherent lag, and although electromyographic signals precede human movement, electromyography-based recognition and prediction of lower-limb motion lags far behind that of upper-limb motion patterns, especially for continuous prediction.
Content of the application
In view of the above drawbacks of the prior art, the present application is directed to a multi-step intent recognition and motion prediction method, system, terminal and medium, for solving the problem that prior-art human-computer interaction systems relying on physical sensors cannot recognize and predict continuously in real time because of their inherent lag, and the problem that, although electromyographic signals precede human movement, electromyography-based recognition and prediction of lower-limb motion lags far behind that of upper-limb motion patterns.
To achieve the above and other related objects, the present application provides a multi-step intent recognition and motion prediction method, comprising: collecting myoelectric signals and various joint angle values of the legs of a wearer under different gaits, and preprocessing the myoelectric signals and the joint angle values; extracting gait feature vectors according to the preprocessed electromyographic signals and the angle values of all joints; constructing a motion intention recognition model according to the gait feature vector for predicting motion intention; and/or constructing a joint angle recognition model according to the gait feature vector for predicting the joint angle.
In one embodiment of the application, the gait feature vector comprises: the electromyographic signal integral value of each acquisition channel for each frame of data in the electromyographic signals, the electromyographic signal state value of each acquisition channel, and the angle values of the three right-leg joints corresponding to the first sampling point in that frame of data.
In an embodiment of the present application, the integrated value and the myoelectric state value are used to train a gaussian kernel support vector machine, so as to construct the movement intention recognition model.
In an embodiment of the present application, the integrated value and the myoelectricity state value are used to train a linear kernel support vector machine, so as to construct the joint angle recognition model.
In an embodiment of the present application, preprocessing the electromyographic signal and the angle values of each joint includes: denoising the electromyographic signals and smoothing the angle values.
In an embodiment of the application, the electromyographic signal state is related to the electromyographic signal integrated value and the zero crossing number.
To achieve the above and other related objects, the present application provides a multi-step intent recognition and motion prediction system comprising: a preprocessing module for collecting electromyographic signals and joint angle values of a wearer's legs under different gaits and preprocessing them; a feature vector acquisition module, coupled with the preprocessing module, for extracting gait feature vectors from the preprocessed electromyographic signals and joint angle values; and a prediction module, coupled with the feature vector acquisition module, for constructing a motion intention recognition model from the gait feature vectors for predicting motion intention, and/or constructing a joint angle recognition model from the gait feature vectors for predicting joint angles.
In one embodiment of the present application, the gait feature vector comprises: the integral value of the electromyographic signals of each acquisition channel of one frame of data, the state value of the electromyographic signals of each acquisition channel and the angle values of the three joints of the right leg corresponding to the first sampling point in the frame of data.
To achieve the above and other related objects, the present application provides a multi-step intention recognition and motion prediction terminal comprising: a memory for storing a computer program; and a processor for executing the computer program to perform the multi-step intent recognition and motion prediction method.
To achieve the above and other related objects, the present application provides a computer storage medium storing a computer program, wherein the computer program implements the multi-step intention recognition and motion prediction method when running.
As described above, the multi-step intention recognition and motion prediction method, system, terminal and medium of the application have the following beneficial effects: they address the shortcoming that prior-art human-computer interaction systems relying on physical sensors cannot recognize and predict continuously in real time because of their inherent lag, and the problem that, although electromyographic signals precede human movement, electromyography-based recognition and prediction of lower-limb motion lags far behind that of upper-limb motion patterns. By accurately classifying and predicting several typical human gaits, the application enables the walking mode to be identified promptly and the motion trajectory to be recognized and predicted continuously in real time.
Drawings
FIG. 1 is a flow chart of a multi-step intent recognition and motion prediction method according to an embodiment of the application.
FIG. 2 is a schematic diagram of a multi-step intent recognition and motion prediction system in accordance with one embodiment of the present application.
Fig. 3 is a schematic diagram showing a structure of a multi-step intention recognition and motion prediction terminal according to an embodiment of the present application.
Detailed Description
Other advantages and effects of the present application will become apparent to those skilled in the art from the following disclosure, which describes the embodiments of the application with reference to specific examples. The application may also be practiced or applied in other different embodiments, and the details in this specification may be modified or changed in various ways based on different viewpoints and applications without departing from the spirit of the application. It should be noted that, provided there is no conflict, the following embodiments and the features in the embodiments may be combined with each other.
In the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. Spatially relative terms, such as "upper," "lower," "left," "right," "below," "above," and the like, may be used herein to facilitate describing the relationship of one element or feature to another element or feature as illustrated in the figures.
Throughout the specification, when a portion is said to be "coupled" to another portion, this includes not only the case of "direct connection" but also the case of "indirect connection" with other elements interposed therebetween. In addition, when a certain component is said to be "included" in a certain section, unless otherwise stated, other components are not excluded, but it is meant that other components may be included.
The first, second, and third terms are used herein to describe various portions, components, regions, layers and/or sections, but are not limited thereto. These terms are only used to distinguish one portion, component, region, layer or section from another portion, component, region, layer or section. Thus, a first portion, component, region, layer or section discussed below could be termed a second portion, component, region, layer or section without departing from the scope of the present application.
Furthermore, as used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises," "comprising," "includes," and/or "including" specify the presence of stated features, operations, elements, components, items, categories, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, categories, and/or groups. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means any of the following: A; B; C; A and B; A and C; B and C; A, B and C. An exception to this definition occurs only when a combination of elements, functions or operations is in some way inherently mutually exclusive.
The human-machine interaction systems relying on physical sensors that are mostly used in the prior art cannot achieve continuous real-time recognition and prediction because of their inherent lag, and although electromyographic signals precede human movement, electromyography-based recognition and prediction of lower-limb motion lags far behind that of upper-limb motion patterns, especially for continuous prediction.
Therefore, the application provides a multi-step intention recognition and motion prediction method for overcoming the shortcoming that prior-art human-computer interaction systems relying on physical sensors cannot recognize and predict continuously in real time because of their inherent lag, and the problem that, although electromyographic signals precede human movement, electromyography-based recognition and prediction of lower-limb motion lags far behind that of upper-limb motion patterns. By accurately classifying and predicting several typical human gaits, the application enables the walking mode to be identified promptly and the motion trajectory to be recognized and predicted continuously in real time.
The method comprises the following steps:
Collecting myoelectric signals and various joint angle values of the legs of a wearer under different gaits, and preprocessing the myoelectric signals and the joint angle values;
extracting gait feature vectors according to the preprocessed electromyographic signals and the angle values of all joints;
constructing a motion intention recognition model according to the gait feature vector for predicting motion intention; and/or constructing a joint angle recognition model according to the gait feature vector for predicting the joint angle.
As shown in fig. 1, a flow diagram of a multi-step intent recognition and motion prediction method in an embodiment of the present application is shown.
The method comprises the following steps:
s101: myoelectric signals and various joint angle values of the legs of the wearer under different gaits are collected and preprocessed.
Optionally, multi-channel electromyographic signals and joint angle values of the exoskeleton wearer are collected under different gaits, and the electromyographic signals and joint angle values are preprocessed respectively.
Optionally, the multi-channel electromyographic signals and the joint angle values of the exoskeleton wearer are acquired in real time under different gaits.
Optionally, denoising the electromyographic signals, and smoothing the joint angle value.
Optionally, the electromyographic signals are acquired by surface electromyographic sensors mounted on the skin over the bellies of the muscles that are active in the different walking modes.
Optionally, each joint angle is measured by an angle sensor placed near the joint or an angle sensor built in the motor.
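As an illustrative sketch (not part of the patented method itself), the preprocessing described above could be implemented as follows, assuming a band-pass filter for denoising the surface electromyographic signals and a moving-average filter for smoothing the joint angles; the sampling rate, cutoff frequencies and window length are hypothetical choices.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000  # assumed EMG sampling rate in Hz (not specified in the text)

def denoise_emg(emg, fs=FS, band=(20.0, 450.0), order=4):
    """Band-pass filter each EMG channel (samples x channels) to suppress
    motion artifacts and high-frequency noise."""
    b, a = butter(order, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="bandpass")
    return filtfilt(b, a, emg, axis=0)

def smooth_angles(angles, window=21):
    """Moving-average smoothing of joint angle trajectories (samples x joints)."""
    kernel = np.ones(window) / window
    return np.apply_along_axis(lambda col: np.convolve(col, kernel, mode="same"), 0, angles)
```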
S102: extracting gait feature vectors according to the preprocessed electromyographic signals and the angle values of all joints;
optionally, gait feature vectors are extracted according to the processed electromyographic signals and the angle values of each joint, and the gait feature vectors comprise a plurality of feature values.
Optionally, each extracted feature vector is a multidimensional feature vector formed by the electromyographic signal integral value of each acquisition channel for one frame of data, the electromyographic signal state value of each acquisition channel, and the angle values of the three right-leg joints corresponding to the first sampling point in that frame of data.
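To make the composition of the per-frame feature vector concrete, the sketch below (an assumption about data layout, not code from the application) concatenates, for one frame, the per-channel integral values, the per-channel state values, and the three right-leg joint angles at the frame's first sampling point; how the integral and state values are obtained is described in the paragraphs that follow.

```python
import numpy as np

def frame_feature_vector(iemg_values, state_values, first_sample_angles):
    """Assemble one gait feature vector for a frame.
    iemg_values:         EMG integral value per acquisition channel, shape (n_channels,)
    state_values:        EMG state value (0 or 1) per acquisition channel, shape (n_channels,)
    first_sample_angles: hip, knee and ankle angles of the right leg at the
                         frame's first sampling point, shape (3,)
    Returns a vector of length 2 * n_channels + 3."""
    return np.concatenate([iemg_values, state_values, first_sample_angles])
```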
Alternatively, the electromyographic signal state value is an active state of the muscle, and after endpoint detection of the surface electromyographic signal, the electromyographic signal state between the start and end endpoints is defined as a "1" value, and the other electromyographic relaxation states are defined as "0" values.
Optionally, the electromyographic signal state is determined from the electromyographic signal integral value and the zero-crossing number.
Optionally, the electromyographic signal state is an active state of a muscle, wherein the electromyographic signal state of the electromyographic signals of each acquisition channel of one frame of data is extracted by performing endpoint detection on the electromyographic signals.
Optionally, the endpoint detection method detects an electromyographic signal activity start time and an activity end time.
Optionally, the detection condition for the activity start time is: the integral value of the electromyographic signal is greater than the maximum integral value, and the zero-crossing number is greater than or equal to the zero-crossing-number threshold of the electromyographic signal in the static relaxed state.
Optionally, the detection condition for the activity end time is: the integral value of the electromyographic signal is smaller than the maximum integral value, and the zero-crossing number is smaller than the zero-crossing-number threshold in the static relaxed state.
Optionally, the zero-crossing threshold is the larger of the absolute values of the maximum and minimum of the electromyographic signal in the static relaxed state.
Optionally, in calculating the zero-crossing number of the i-th frame, the larger of the absolute values of the maximum and minimum of the electromyographic signal in the static relaxed state is computed and recorded as Th, which serves as the threshold for counting zero crossings, with n taking the positive integers from 1 to N.
Optionally, the maximum myoelectric value of the electromyographic signal in the static relaxed state is used as the detection threshold, and a suitable zero-crossing-number threshold is set according to the zero-crossing number of the electromyographic signal in the static relaxed state. When the myoelectric value and the zero-crossing number of a frame are greater than or equal to the set thresholds and this condition lasts for a period of time, that frame is taken as the start time of electromyographic activity; similarly, when the myoelectric value and the zero-crossing number of a frame are smaller than the set thresholds and this lasts for a period of time, that frame is taken as the end time of electromyographic activity.
Optionally, the integral value of the electromyographic signal of the i-th frame is calculated as IEMG(i) = Σ_{n=1}^{N} |x_i(n)|, where N is the number of sampling points in a frame and x_i(n) is the electromyographic signal at the n-th sampling point of the i-th frame.
Optionally, the zero-crossing number ZCR(i) of the i-th frame is obtained by counting, over the N sampling points of the frame, the sign changes between adjacent samples x_i(n) and x_i(n+1) whose amplitude difference is not smaller than the threshold Th.
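Since the integral-value and zero-crossing formulas are given above only in words, the sketch below shows one plausible reading of them (an interpretation, not the application's exact formulas): the i-th frame's integral value is taken as the sum of absolute sample values, a zero crossing is counted only when adjacent samples change sign with an amplitude difference of at least Th, and the activity state toggles between 0 and 1 when the start or end condition persists for several consecutive frames.

```python
import numpy as np

def iemg(frame):
    """Integral value of one EMG frame: sum of absolute sample values."""
    return float(np.sum(np.abs(frame)))

def zcr(frame, amp_th):
    """Zero-crossing number of one EMG frame, counting only sign changes whose
    amplitude difference reaches the threshold Th (amp_th)."""
    signs = np.sign(frame)
    sign_change = signs[:-1] * signs[1:] < 0
    big_enough = np.abs(np.diff(frame)) >= amp_th
    return int(np.sum(sign_change & big_enough))

def activity_states(frames, iemg_max, amp_th, zc_count_th, hold=3):
    """Per-frame muscle activity state: 1 between detected start and end endpoints, else 0.
    iemg_max:    maximum integral value of the EMG in the static relaxed state
    amp_th:      Th, the larger absolute value of the relaxed-state EMG extremes
    zc_count_th: zero-crossing-number threshold measured in the relaxed state
    hold:        number of consecutive frames a condition must last (hypothetical value)"""
    state, run, states = 0, 0, []
    for f in frames:
        if state == 0:   # looking for the activity start condition
            cond = iemg(f) > iemg_max and zcr(f, amp_th) >= zc_count_th
        else:            # looking for the activity end condition
            cond = iemg(f) < iemg_max and zcr(f, amp_th) < zc_count_th
        run = run + 1 if cond else 0
        if run >= hold:  # the condition has lasted long enough: toggle the state
            state, run = 1 - state, 0
        states.append(state)
    return states
```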
S103: constructing a motion intention recognition model according to the gait feature vector for predicting motion intention; and/or constructing a joint angle recognition model according to the gait feature vector for predicting the joint angle.
Optionally, a motion intention recognition model is constructed to predict the motion intention according to the gait feature vector.
Optionally, constructing a joint angle recognition model to predict the joint angle according to the gait feature vector.
Optionally, a motion intention recognition model is constructed according to the gait feature vector to predict the motion intention, and a joint angle recognition model is constructed according to the gait feature vector to predict the joint angle.
Optionally, the motion intention recognition model is trained with a support vector machine using a Gaussian kernel function, which offers good accuracy and generality for lower-limb motion prediction based on surface electromyographic signals; the myoelectric integral values of the multidimensional feature vectors and the corresponding motion-state phase labels form the training data set used to train the Gaussian kernel support vector machine.
Optionally, the joint angle prediction model is trained with a linear kernel support vector machine; the feature vectors belonging to the same gait phase and the corresponding joint angles form a data set for each gait phase, which is used to train the linear kernel support vector machine joint angle prediction model.
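As a hedged illustration of the two models described above (the library, the regression formulation and the hyperparameters are assumptions on my part, not details given by the application), a Gaussian (RBF) kernel SVM classifier can be trained on the labelled feature vectors for motion intention recognition, and one linear-kernel support vector regressor per gait phase and joint can map feature vectors to joint angles.

```python
import numpy as np
from sklearn.svm import SVC, LinearSVR

def train_intent_model(features, phase_labels):
    """Gaussian kernel SVM for gait-phase / motion-intention recognition.
    features: (n_frames, n_features); phase_labels: (n_frames,) with values 1..4."""
    return SVC(kernel="rbf", C=1.0, gamma="scale").fit(features, phase_labels)

def train_angle_models(features, phase_labels, joint_angles):
    """One linear-kernel SV regressor per (gait phase, joint).
    joint_angles: (n_frames, 3) hip/knee/ankle angles of the right leg."""
    models = {}
    for phase in np.unique(phase_labels):
        mask = phase_labels == phase
        models[int(phase)] = [
            LinearSVR(C=1.0, max_iter=10000).fit(features[mask], joint_angles[mask, j])
            for j in range(joint_angles.shape[1])
        ]
    return models
```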
The following description is given with reference to a specific embodiment.
Example 1:
A method for real-time recognition and prediction of lower-limb multi-gait intention, aimed at recognizing gait phases and predicting the hip, knee and ankle joint angle values of the right leg for an adult male wearer jogging forward at a constant speed of 6 km/h on a horizontally placed treadmill.
The specific implementation steps are as follows:
First, with the wearer jogging forward at a constant speed of 6 km/h on a horizontally placed treadmill, surface electromyographic signals are collected at the extensor digitorum longus, medial gastrocnemius, tibialis anterior, rectus femoris, vastus lateralis, vastus medialis and biceps femoris, together with the hip, knee and ankle joint angle values; the acquired surface electromyographic signals are denoised and the joint angle values are smoothed.
Secondly, feature extraction is performed on the denoised surface electromyographic signal data and the joint angle data to obtain the feature values that form the feature vector. The feature vector consists of the electromyographic signal integral value of each channel of the surface electromyographic signal, the electromyographic signal state of each acquisition channel, and the angle values of the three joints at the current moment.
Thirdly, motion intention is predicted with the trained motion intention recognition model: the feature vectors and their corresponding gait phase categories are fed into a Gaussian kernel support vector machine to train the motion intention recognition model, and in the jogging state the gait is divided into four phases, namely early swing (label 1), late swing (label 2), early support (label 3) and late support (label 4).
The maximum myoelectric values of the surface electromyographic signals at each position in the standing, relaxed state and the corresponding average zero-crossing numbers are stored as M1, M11, M12, M13, M21, M22, M23, M31, M32, M33, M41, M42 and M43.
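Putting the steps of this embodiment together, a minimal online loop might look like the following sketch (an illustration that reuses the helper functions sketched earlier in this description; the per-channel thresholds correspond to the stored relaxed-state values, and the frame length and channel count are hypothetical): the current frame is turned into a feature vector, the gait phase is classified, and the phase-specific regressors then predict the three right-leg joint angles.

```python
import numpy as np

def predict_step(emg_frame, first_sample_angles, intent_model, angle_models,
                 iemg_max, amp_th, zc_count_th):
    """emg_frame:           (frame_len, n_channels) denoised surface EMG of the current frame
    first_sample_angles: hip/knee/ankle angles at the frame's first sampling point
    iemg_max, amp_th, zc_count_th: per-channel relaxed-state thresholds (arrays)"""
    n_channels = emg_frame.shape[1]
    iemg_values = [iemg(emg_frame[:, ch]) for ch in range(n_channels)]
    # Per-frame approximation of the activity state (the full method uses
    # endpoint detection with a persistence requirement, see activity_states above).
    state_values = [
        1.0 if (iemg_values[ch] > iemg_max[ch]
                and zcr(emg_frame[:, ch], amp_th[ch]) >= zc_count_th[ch]) else 0.0
        for ch in range(n_channels)
    ]
    x = frame_feature_vector(np.array(iemg_values), np.array(state_values),
                             np.asarray(first_sample_angles)).reshape(1, -1)
    phase = int(intent_model.predict(x)[0])  # 1 early swing ... 4 late support
    angles = [reg.predict(x)[0] for reg in angle_models[phase]]
    return phase, angles
```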
Similar to the principles of the embodiments described above, the present application provides a multi-step intent recognition and motion prediction system, the system comprising:
The preprocessing module is used for collecting electromyographic signals and joint angle values of a wearer's legs under different gaits and preprocessing them;
the feature vector acquisition module is coupled with the preprocessing module and is used for extracting gait feature vectors from the preprocessed electromyographic signals and joint angle values;
a motion intention prediction module, coupled to the feature vector acquisition module, for predicting a motion intention based on the trained motion intention recognition model;
and the joint angle prediction module is coupled with the feature vector acquisition module and is used for predicting the joint angle based on the trained joint angle identification model.
Specific embodiments are provided below with reference to the accompanying drawings:
a schematic diagram of a multi-step intent recognition and motion prediction system in accordance with an embodiment of the present application is shown in fig. 2.
The system comprises:
The preprocessing module 21 is used for acquiring myoelectric signals and various joint angle values of the legs of the wearer under different gaits and preprocessing the myoelectric signals and the joint angle values;
The feature vector acquisition module 22 is coupled to the preprocessing module 21, and is configured to extract gait feature vectors according to the preprocessed electromyographic signals and the angle values of the joints;
a motion intention prediction module 23 coupled to the feature vector acquisition module 22 for constructing a motion intention recognition model from the gait feature vector for predicting motion intention; and/or constructing a joint angle recognition model according to the gait feature vector for predicting the joint angle.
Optionally, the multi-channel electromyographic signals and the joint angle values of the exoskeleton wearer are acquired in real time under different gaits.
Optionally, denoising the electromyographic signals, and smoothing the joint angle value.
Optionally, the electromyographic signals are acquired by surface electromyographic sensors mounted on the skin over the bellies of the muscles that are active in the different walking modes.
Optionally, each joint angle is measured by an angle sensor placed near the joint or an angle sensor built in the motor.
Optionally, gait feature vectors are extracted according to the processed electromyographic signals and the angle values of each joint, and the gait feature vectors comprise a plurality of feature values.
Optionally, each extracted feature vector is a multidimensional feature vector formed by the electromyographic signal integral value of each acquisition channel for one frame of data, the electromyographic signal state value of each acquisition channel, and the angle values of the three right-leg joints corresponding to the first sampling point in that frame of data.
Alternatively, the electromyographic signal state value is an active state of the muscle, and after endpoint detection of the surface electromyographic signal, the electromyographic signal state between the start and end endpoints is defined as a "1" value, and the other electromyographic relaxation states are defined as "0" values.
Optionally, the electromyographic signal state is determined from the electromyographic signal integral value and the zero-crossing number.
Optionally, the electromyographic signal state is an active state of a muscle, wherein the electromyographic signal state of the electromyographic signals of each acquisition channel of one frame of data is extracted by performing endpoint detection on the electromyographic signals.
Optionally, the endpoint detection method detects an electromyographic signal activity start time and an activity end time.
Optionally, the detection condition for the activity start time is: the integral value of the electromyographic signal is greater than the maximum integral value, and the zero-crossing number is greater than or equal to the zero-crossing-number threshold of the electromyographic signal in the static relaxed state.
Optionally, the detection condition for the activity end time is: the integral value of the electromyographic signal is smaller than the maximum integral value, and the zero-crossing number is smaller than the zero-crossing-number threshold in the static relaxed state.
Optionally, the zero-crossing threshold is the larger of the absolute values of the maximum and minimum of the electromyographic signal in the static relaxed state.
Optionally, in calculating the zero-crossing number of the i-th frame, the larger of the absolute values of the maximum and minimum of the electromyographic signal in the static relaxed state is computed and recorded as Th, which serves as the threshold for counting zero crossings, with n taking the positive integers from 1 to N.
Optionally, the maximum myoelectric value of the electromyographic signal in the static relaxed state is used as the detection threshold, and a suitable zero-crossing-number threshold is set according to the zero-crossing number of the electromyographic signal in the static relaxed state. When the myoelectric value and the zero-crossing number of a frame are greater than or equal to the set thresholds and this condition lasts for a period of time, that frame is taken as the start time of electromyographic activity; similarly, when the myoelectric value and the zero-crossing number of a frame are smaller than the set thresholds and this lasts for a period of time, that frame is taken as the end time of electromyographic activity.
Optionally, the integral value of the electromyographic signal of the i-th frame is calculated as IEMG(i) = Σ_{n=1}^{N} |x_i(n)|, where N is the number of sampling points in a frame and x_i(n) is the electromyographic signal at the n-th sampling point of the i-th frame.
Optionally, the zero-crossing number ZCR(i) of the i-th frame is obtained by counting, over the N sampling points of the frame, the sign changes between adjacent samples x_i(n) and x_i(n+1) whose amplitude difference is not smaller than the threshold Th.
Optionally, a motion intention recognition model is constructed to predict the motion intention according to the gait feature vector.
Optionally, constructing a joint angle recognition model to predict the joint angle according to the gait feature vector.
Optionally, a motion intention recognition model is constructed according to the gait feature vector to predict the motion intention, and a joint angle recognition model is constructed according to the gait feature vector to predict the joint angle.
Optionally, the motion intention recognition model is trained with a support vector machine using a Gaussian kernel function, which offers good accuracy and generality for lower-limb motion prediction based on surface electromyographic signals; the myoelectric integral values of the multidimensional feature vectors and the corresponding motion-state phase labels form the training data set used to train the Gaussian kernel support vector machine.
Optionally, the joint angle prediction model is trained with a linear kernel support vector machine; the feature vectors belonging to the same gait phase and the corresponding joint angles form a data set for each gait phase, which is used to train the linear kernel support vector machine joint angle prediction model.
As shown in fig. 3, a schematic diagram of a multi-step intent recognition and motion prediction terminal 30 in an embodiment of the present application is shown.
The multi-step intention recognition and motion prediction terminal 30 includes: a memory 31 and a processor 32, the memory 31 for storing a computer program; the processor 32 runs a computer program to implement the multi-step intent recognition and motion prediction method as described in fig. 1.
Alternatively, the number of the memories 31 may be one or more, and the number of the processors 32 may be one or more, and one is taken as an example in fig. 3.
Optionally, the processor 32 in the multi-step intention recognition and motion prediction terminal 30 loads one or more instructions corresponding to the process of the application program into the memory 31 according to the steps as described in fig. 2, and the processor 32 executes the application program stored in the memory 31, thereby implementing various functions in the multi-step intention recognition and motion prediction method as described in fig. 1.
Optionally, the memory 31 may include, but is not limited to, high-speed random access memory and nonvolatile memory, such as one or more disk storage devices, flash memory devices or other nonvolatile solid-state storage devices.
Optionally, the processor 32 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP) and the like; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The present application also provides a computer readable storage medium storing a computer program which, when run, implements the multi-step intent recognition and motion prediction method as shown in fig. 1. The computer-readable storage medium may include, but is not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions. The computer readable storage medium may be a product that has not been integrated into a computer device, or a component already incorporated into a computer device.
In summary, the multi-step intention recognition and motion prediction method, system, terminal and medium solve the problem that lower-limb assistive exoskeleton robots in the prior art cannot recognize and predict with high real-time performance during motion. Therefore, the application effectively overcomes various defects in the prior art and has high industrial utilization value.
The above embodiments merely illustrate the principles and effects of the present application and are not intended to limit it. Anyone skilled in the art may modify or change the above embodiments without departing from the spirit and scope of the application. Accordingly, all equivalent modifications and variations made by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed herein shall still be covered by the claims of the present application.

Claims (5)

1. A multi-step intent recognition and motion prediction method, comprising:
Collecting myoelectric signals and various joint angle values of the legs of a wearer under different gaits, and preprocessing the myoelectric signals and the joint angle values;
extracting gait feature vectors according to the preprocessed electromyographic signals and the angle values of all joints;
Wherein the gait feature vector comprises: the electromyographic signal integral value of each acquisition channel of each frame of data in the electromyographic signals, the electromyographic signal state value of the electromyographic signals of each acquisition channel and the angle values of three joints of the right leg corresponding to the first sampling point in the frame of data;
And wherein the electromyographic signal state is associated with the electromyographic signal integral value and a zero-crossing number; the electromyographic signal state is an active state of a muscle; the electromyographic signal states of the electromyographic signals of each acquisition channel of one frame of data are extracted by performing endpoint detection on the electromyographic signals; the endpoint detection detects the activity start time and the activity end time of the electromyographic signals; the detection condition for the activity start time is: the integral value of the electromyographic signal is greater than the maximum integral value, and the zero-crossing number is greater than or equal to the zero-crossing-number threshold of the electromyographic signal in the static relaxed state; the detection condition for the activity end time is: the integral value of the electromyographic signal is smaller than the maximum integral value, and the zero-crossing number is smaller than the zero-crossing-number threshold in the static relaxed state; the zero-crossing threshold is the larger of the absolute values of the maximum and minimum of the electromyographic signal in the static relaxed state; the maximum integral value is the maximum myoelectric value of the electromyographic signal in the static relaxed state;
Training a Gaussian kernel support vector machine according to the integral value in the gait feature vector and the electromyographic signal state value, and constructing a movement intention recognition model according to the Gaussian kernel support vector machine so as to be used for predicting movement intention; and/or training a linear kernel support vector machine according to the integral value in the gait feature vector and the electromyographic signal state value, so as to construct a joint angle recognition model for predicting the joint angle.
2. The multi-step intent recognition and motion prediction method according to claim 1, wherein preprocessing the electromyographic signals and the respective joint angle values includes: denoising the electromyographic signals and smoothing the angle values.
3. A multi-step intent recognition and motion prediction system, comprising:
The preprocessing module is used for acquiring myoelectric signals of the legs of a wearer under different gaits and angle values of all joints and preprocessing the myoelectric signals;
the feature vector acquisition module is coupled with the preprocessing module and is used for extracting gait feature vectors according to the preprocessed electromyographic signals and the angle values of all joints;
Wherein the gait feature vector comprises: the electromyographic signal integral value of each acquisition channel of each frame of data in the electromyographic signals, the electromyographic signal state value of the electromyographic signals of each acquisition channel and the angle values of three joints of the right leg corresponding to the first sampling point in the frame of data;
And wherein the electromyographic signal state is associated with the electromyographic signal integral value and a zero-crossing number; the electromyographic signal state is an active state of a muscle; the electromyographic signal states of the electromyographic signals of each acquisition channel of one frame of data are extracted by performing endpoint detection on the electromyographic signals; the endpoint detection detects the activity start time and the activity end time of the electromyographic signals; the detection condition for the activity start time is: the integral value of the electromyographic signal is greater than the maximum integral value, and the zero-crossing number is greater than or equal to the zero-crossing-number threshold of the electromyographic signal in the static relaxed state; the detection condition for the activity end time is: the integral value of the electromyographic signal is smaller than the maximum integral value, and the zero-crossing number is smaller than the zero-crossing-number threshold in the static relaxed state; the zero-crossing threshold is the larger of the absolute values of the maximum and minimum of the electromyographic signal in the static relaxed state; the maximum integral value is the maximum myoelectric value of the electromyographic signal in the static relaxed state;
the prediction module is coupled with the feature vector acquisition module and is used for training a Gaussian kernel support vector machine according to the integral value and the electromyographic signal state value in the gait feature vector so as to construct a motion intention recognition model for predicting motion intention; and/or training a linear kernel support vector machine according to the integral value in the gait feature vector and the electromyographic signal state value, so as to construct a joint angle recognition model for predicting the joint angle.
4. A multi-step intent recognition and motion prediction terminal, comprising:
A memory for storing a computer program;
a processor for executing the computer program to perform the multi-step intent recognition and motion prediction method as claimed in any one of claims 1 to 2.
5. A computer storage medium, characterized in that a computer program is stored thereon, wherein the computer program, when run, implements the multi-step intent recognition and motion prediction method as claimed in any one of claims 1 to 2.
CN201911120066.3A 2019-11-15 Multi-step intention recognition and motion prediction method, system, terminal and medium Active CN112807001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911120066.3A CN112807001B (en) 2019-11-15 Multi-step intention recognition and motion prediction method, system, terminal and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911120066.3A CN112807001B (en) 2019-11-15 Multi-step intention recognition and motion prediction method, system, terminal and medium

Publications (2)

Publication Number Publication Date
CN112807001A CN112807001A (en) 2021-05-18
CN112807001B (en) 2024-06-04


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104799854A (en) * 2015-04-29 2015-07-29 深圳大学 Surface myoelectricity acquisition device and myoelectricity signal processing method thereof
CN105615890A (en) * 2015-12-24 2016-06-01 西安交通大学 Angle and myoelectricity continuous decoding method for human body lower limb walking joint
CN105892676A (en) * 2016-04-26 2016-08-24 中国科学院自动化研究所 Human-machine interaction device, system and method of vascular intervention operation wire feeder
CN106955111A (en) * 2017-04-21 2017-07-18 海南大学 Brain paralysis youngster's gait recognition method based on surface electromyogram signal
CN107016233A (en) * 2017-03-14 2017-08-04 中国科学院计算技术研究所 The association analysis method and system of motor behavior and cognitive ability
CN107397649A (en) * 2017-08-10 2017-11-28 燕山大学 A kind of upper limbs exoskeleton rehabilitation robot control method based on radial base neural net
CN108874149A (en) * 2018-07-28 2018-11-23 华中科技大学 A method of continuously estimating human synovial angle based on surface electromyogram signal
CN109276245A (en) * 2018-11-01 2019-01-29 重庆中科云丛科技有限公司 A kind of surface electromyogram signal characteristic processing and joint angles prediction technique and system
WO2019095055A1 (en) * 2017-11-15 2019-05-23 Uti Limited Partnership Method and system utilizing pattern recognition for detecting atypical movements during physical activity

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104799854A (en) * 2015-04-29 2015-07-29 深圳大学 Surface myoelectricity acquisition device and myoelectricity signal processing method thereof
CN105615890A (en) * 2015-12-24 2016-06-01 西安交通大学 Angle and myoelectricity continuous decoding method for human body lower limb walking joint
CN105892676A (en) * 2016-04-26 2016-08-24 中国科学院自动化研究所 Human-machine interaction device, system and method of vascular intervention operation wire feeder
CN107016233A (en) * 2017-03-14 2017-08-04 中国科学院计算技术研究所 The association analysis method and system of motor behavior and cognitive ability
CN106955111A (en) * 2017-04-21 2017-07-18 海南大学 Brain paralysis youngster's gait recognition method based on surface electromyogram signal
CN107397649A (en) * 2017-08-10 2017-11-28 燕山大学 A kind of upper limbs exoskeleton rehabilitation robot control method based on radial base neural net
WO2019095055A1 (en) * 2017-11-15 2019-05-23 Uti Limited Partnership Method and system utilizing pattern recognition for detecting atypical movements during physical activity
CN108874149A (en) * 2018-07-28 2018-11-23 华中科技大学 A method of continuously estimating human synovial angle based on surface electromyogram signal
CN109276245A (en) * 2018-11-01 2019-01-29 重庆中科云丛科技有限公司 A kind of surface electromyogram signal characteristic processing and joint angles prediction technique and system

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
A surface electromyography-based pre-impact fall detection method; Xiao Jinzhuang; 2018 Chinese Automation Congress; 2018-12-02; 681-685 *
Research on feature classification of upper-limb high-density sEMG; Xia Chunyu; Information Science and Technology; 2019-01-15; 1-69 *
Prediction of human lower-limb posture and motion state; Luo Wuyi; China Master's Theses Full-text Database, Engineering Science and Technology II; 2019, No. 04; C030-91 *
Luo Wuyi. Prediction of human lower-limb posture and motion state. China Master's Theses Full-text Database, Engineering Science and Technology II. 2019, No. 04, C030-91. *

Similar Documents

Publication Publication Date Title
CN110537922B (en) Human body walking process lower limb movement identification method and system based on deep learning
Joshi et al. Classification of gait phases from lower limb EMG: Application to exoskeleton orthosis
Camargo et al. A machine learning strategy for locomotion classification and parameter estimation using fusion of wearable sensors
Ryu et al. Real-time gait subphase detection using an EMG signal graph matching (ESGM) algorithm based on EMG signals
CN113043248B (en) Transportation and assembly whole-body exoskeleton system based on multi-source sensor and control method
CN111898487A (en) Human motion mode real-time identification method of flexible exoskeleton system
Lee et al. Abnormal gait recognition using 3D joint information of multiple Kinects system and RNN-LSTM
Liu et al. sEMG-based continuous estimation of knee joint angle using deep learning with convolutional neural network
CN112949676B (en) Self-adaptive motion mode identification method of flexible lower limb assistance exoskeleton robot
CN106361346A (en) Method for computing hand rehabilitation indexes based on sensing technology
Zhu et al. An attention-based cnn-lstm model with limb synergy for joint angles prediction
Song et al. Adaptive neural fuzzy reasoning method for recognizing human movement gait phase
Mallikarjuna et al. Feedback-based gait identification using deep neural network classification
CN112807001B (en) Multi-step intention recognition and motion prediction method, system, terminal and medium
CN112487902B (en) Exoskeleton-oriented gait phase classification method based on TCN-HMM
Song et al. Continuous online prediction of lower limb joints angles based on sEMG signals by deep learning approach
CN112807001A (en) Multi-modal intent recognition and motion prediction method, system, terminal, and medium
Ma et al. A real-time gait switching method for lower-limb exoskeleton robot based on sEMG signals
Zhang et al. A real-time gait phase recognition method based on multi-information fusion
CN115281657A (en) Human body gait recognition method of flexible exoskeleton system
CN113910206B (en) Exoskeleton power assisting system combined with multiple sensors
Wang et al. Terrain recognition and gait cycle prediction using imu
Ren et al. Gait phase recognition of multi-mode locomotion based on multi-layer perceptron for the plantar pressure measurement system
Cao et al. Research on human sports rehabilitation design based on object-oriented technology
Chen et al. An adaptive gait learning strategy for lower limb exoskeleton robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant