CN112807001A - Multi-modal intent recognition and motion prediction method, system, terminal, and medium - Google Patents

Multi-modal intent recognition and motion prediction method, system, terminal, and medium

Info

Publication number
CN112807001A
Authority
CN
China
Prior art keywords
joint angle
electromyographic
electromyographic signals
motion prediction
predict
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911120066.3A
Other languages
Chinese (zh)
Other versions
CN112807001B (en)
Inventor
段有康
陈小刚
桂剑
马斌
赵婷
崔毅
沈芸
李顺芬
宋志棠
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Zhongyan Jiuyi Technology Co ltd
Original Assignee
Shanghai Zhongyan Jiuyi Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Zhongyan Jiuyi Technology Co ltd filed Critical Shanghai Zhongyan Jiuyi Technology Co ltd
Priority to CN201911120066.3A priority Critical patent/CN112807001B/en
Publication of CN112807001A publication Critical patent/CN112807001A/en
Application granted granted Critical
Publication of CN112807001B publication Critical patent/CN112807001B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/112 Gait analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/1121 Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B 5/1122 Determining geometric values, e.g. centre of rotation or angular range of movement of movement trajectories
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 Details of waveform analysis
    • A61B 5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7271 Specific aspects of physiological measurement analysis
    • A61B 5/7275 Determining trends in physiological measurement data; Predicting development of a medical condition based on physiological measurements, e.g. determining a risk factor

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Physiology (AREA)
  • Artificial Intelligence (AREA)
  • Biophysics (AREA)
  • Pathology (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • Psychiatry (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Signal Processing (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Evolutionary Computation (AREA)
  • Fuzzy Systems (AREA)
  • Mathematical Physics (AREA)
  • Geometry (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The application provides a multi-modal intent recognition and motion prediction method, system, terminal, and medium, comprising: collecting electromyographic signals and joint angle values of both legs of a wearer in different states, and preprocessing the electromyographic signals and joint angle values; extracting gait feature vectors from the preprocessed electromyographic signals and joint angle values; constructing a movement intention recognition model from the gait feature vectors to predict a movement intention; and/or constructing a joint angle recognition model from the gait feature vectors to predict a joint angle. This addresses two shortcomings of the prior art: human-computer interaction systems that rely on physical sensors suffer from inherent lag and cannot recognize and predict continuously in real time; and although electromyographic signals precede actual body movement, their use for recognizing and predicting lower-limb motion lags far behind their use for upper-limb motion patterns. The method accurately classifies and predicts several typical human gaits, so the walking mode is recognized in time and the motion trajectory is recognized and predicted continuously in real time.

Description

Multi-modal intent recognition and motion prediction method, system, terminal, and medium
Technical Field
The present application relates to the field of robotics, and in particular, to a multi-modal intent recognition and motion prediction method, system, terminal, and medium.
Background
The lower limb exoskeleton power-assisted robot is an auxiliary mechanical device that can identify the motion state of the human lower limbs, provide assistance, and enhance human capability. An exoskeleton is a human-in-the-loop man-machine coupled system, so it must recognize and predict the wearer's movement in order to perceive and assist it. When the robot perceives the human through force sensors, position sensors, and the like, the resulting sensing system is complex and feels somewhat unnatural to the user; moreover, because a delay of about 100 ms exists between the neural signal and the actual movement of the human body, sensing the completed movement inherently lags the wearer's intent. Skeletal muscle is the power source that drives human limb movement, muscle activity corresponds to limb motion, and surface electromyographic signals, which reflect the state of muscle activity, have become an important means of human-computer interaction thanks to their convenient, non-invasive surface acquisition.
The hip, knee, and ankle joints of the lower limb are the most important joints for human movement and contribute greatly to the body's flexibility and balance. Most exoskeleton rehabilitation or power-assisted robots currently applied to the lower limbs acquire human motion information from plantar pressure, human-robot interaction force, and other sensors placed at the joints. The HAL exoskeleton power-assisted robot of the University of Tsukuba, Japan, for example, uses plantar pressure sensors to capture changes in plantar pressure during walking and thereby judge the walking phase. However, human-computer interaction systems that rely on such physical sensors suffer from inherent lag and cannot recognize and predict continuously in real time; and although electromyographic signals precede human movement, their use for recognizing and predicting lower-limb motion, particularly for continuous prediction, is far behind their use for upper-limb motion patterns.
Summary of the application
In view of the above shortcomings of the prior art, the present application aims to provide a multi-modal intent recognition and motion prediction method, system, terminal, and medium that solve two problems of the prior art: human-computer interaction systems relying on physical sensors suffer from inherent lag and cannot recognize and predict continuously in real time, and the use of electromyographic signals, which precede human movement, for recognizing and predicting lower-limb motion is far behind their use for upper-limb motion patterns.
To achieve the above and other related objects, the present application provides a multi-modal intent recognition and motion prediction method, including: collecting electromyographic signals and joint angle values of two legs of a wearer in different states, and preprocessing the electromyographic signals and the joint angle values; extracting gait feature vectors according to the preprocessed electromyographic signals and the angle values of all joints; constructing a movement intention recognition model according to the gait feature vector to predict a movement intention; and/or constructing a joint angle identification model according to the gait feature vector to predict a joint angle.
In an embodiment of the present application, the gait feature vector comprises: the electromyographic integral value of each acquisition channel for each frame of data, the electromyographic state value of each acquisition channel, and the angle values of the three right-leg joints at the first sampling point of the frame.
In an embodiment of the application, a Gaussian-kernel support vector machine is trained by using the integral value and the myoelectric state value to construct the movement intention recognition model.
In an embodiment of the present application, a linear kernel support vector machine is trained by using the integral value and the myoelectric state value, so as to construct the joint angle recognition model.
In an embodiment of the present application, preprocessing the electromyographic signals and the joint angle values includes: denoising the electromyographic signals and smoothing the joint angle values.
In an embodiment of the present application, the electromyographic state value is determined from the electromyographic integral value and the zero-crossing count.
To achieve the above and other related objects, the present application provides a multi-modal intent recognition and motion prediction system, comprising: the preprocessing module is used for acquiring electromyographic signals and joint angle values of two legs of a wearer in different states and preprocessing the electromyographic signals and the joint angle values; the characteristic vector acquisition module is coupled with the preprocessing module and used for extracting gait characteristic vectors according to the preprocessed electromyographic signals and the angle values of all joints; the prediction module is coupled with the characteristic vector acquisition module and used for constructing a movement intention recognition model according to the gait characteristic vector so as to predict a movement intention; and/or constructing a joint angle identification model according to the gait feature vector to predict a joint angle.
In an embodiment of the present application, the gait feature vector comprises: the electromyographic integral value of each acquisition channel for each frame of data, the electromyographic state value of each acquisition channel, and the angle values of the three right-leg joints at the first sampling point of the frame.
To achieve the above and other related objects, the present application provides a multi-modal intention recognition and motion prediction terminal, including: a memory for storing a computer program; a processor for executing the computer program to perform the multi-modal intent recognition and motion prediction method.
To achieve the above and other related objects, the present application provides a computer storage medium storing a computer program, wherein the computer program implements the multi-modal intent recognition and motion prediction method when running.
As described above, the multi-modal intent recognition and motion prediction method, system, terminal, and medium of the present application have the following beneficial effects: they solve the prior-art problems that human-computer interaction systems relying on physical sensors suffer from inherent lag and cannot recognize and predict continuously in real time, and that the use of electromyographic signals, which precede human movement, for recognizing and predicting lower-limb motion is far behind their use for upper-limb motion patterns. The application accurately classifies and predicts several typical human gaits, so the walking mode is recognized in time and the motion trajectory is recognized and predicted continuously in real time.
Drawings
Fig. 1 is a flow chart of the multi-modal intent recognition and motion prediction method in an embodiment of the present application.
Fig. 2 is a schematic structural diagram of the multi-modal intent recognition and motion prediction system in an embodiment of the present application.
Fig. 3 is a schematic structural diagram of the multi-modal intent recognition and motion prediction terminal in an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application is provided by way of specific examples, and other advantages and effects of the present application will be readily apparent to those skilled in the art from the disclosure herein. The present application is capable of other and different embodiments and its several details are capable of modifications and/or changes in various respects, all without departing from the spirit of the present application. It is to be noted that the features in the following embodiments and examples may be combined with each other without conflict.
It is noted that in the following description, reference is made to the accompanying drawings, which illustrate several embodiments of the present application. It is to be understood that other embodiments may be utilized and that mechanical, structural, electrical, and operational changes may be made without departing from the spirit and scope of the present application. The following detailed description is not to be taken in a limiting sense, and the scope of the embodiments of the present application is defined only by the claims of the issued patent. The terminology used herein is for the purpose of describing particular embodiments only and is not intended to limit the application. Spatially relative terms, such as "upper", "lower", "left", "right", "below", "above", "over", and the like, may be used herein to facilitate describing one element or feature's relationship to another element or feature as illustrated in the figures.
Throughout the specification, when a part is referred to as being "coupled" to another part, this includes not only a case of being "directly connected" but also a case of being "indirectly connected" with another element interposed therebetween. In addition, when a certain part is referred to as "including" a certain component, unless otherwise stated, other components are not excluded, but it means that other components may be included.
The terms first, second, third, etc. are used herein to describe various elements, components, regions, layers and/or sections, but are not limited thereto. These terms are only used to distinguish one element, component, region, layer or section from another element, component, region, layer or section. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the scope of the present application.
Also, as used herein, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising", when used in this specification, specify the presence of stated features, operations, elements, components, items, species, and/or groups, but do not preclude the presence or addition of one or more other features, operations, elements, components, items, species, and/or groups thereof. The terms "or" and "and/or" as used herein are to be construed as inclusive, meaning any one or any combination. Thus, "A, B or C" or "A, B and/or C" means "any of the following: A; B; C; A and B; A and C; B and C; A, B and C". An exception to this definition occurs only when a combination of elements, functions, or operations is inherently mutually exclusive in some way.
Most prior-art human-computer interaction systems that rely on physical sensors suffer from inherent lag and cannot recognize and predict in real time and continuously; moreover, although electromyographic signals precede human movement, their use for recognizing and predicting lower-limb motion, particularly for continuous prediction, is far behind their use for upper-limb motion patterns.
Therefore, the application provides a multi-modal intent recognition and motion prediction method to overcome these defects of the prior art. The method accurately classifies and predicts several typical human gaits, so the walking mode is recognized in time and the motion trajectory is recognized and predicted continuously in real time.
The method comprises the following steps:
collecting electromyographic signals and joint angle values of two legs of a wearer in different states, and preprocessing the electromyographic signals and the joint angle values;
extracting gait feature vectors according to the preprocessed electromyographic signals and the angle values of all joints;
constructing a movement intention recognition model according to the gait feature vector to predict a movement intention; and/or constructing a joint angle identification model according to the gait feature vector to predict a joint angle.
Fig. 1 is a schematic flow chart of the multi-modal intent recognition and motion prediction method in an embodiment of the present application.
The method comprises the following steps:
s101: collecting electromyographic signals and joint angle values of two legs of a wearer in different states, and preprocessing the electromyographic signals and the joint angle values.
Optionally, the myoelectric signals and the joint angle values of the exoskeleton wearer under different gaits are collected and preprocessed respectively.
Optionally, the electromyographic signals of the exoskeleton wearer under multiple channels under different gaits and the angle values of all joints are collected in real time.
Optionally, denoising is performed on the electromyographic signal, and smoothing is performed on the joint angle value.
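By way of illustration only, a minimal Python preprocessing sketch along these lines is shown below; the sampling rate, band-pass cutoffs, and smoothing window are assumed values for illustration, not parameters taken from this application:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 1000.0  # assumed sEMG sampling rate (Hz), not specified by the application

def denoise_emg(raw, low=20.0, high=450.0, fs=FS, order=4):
    """Band-pass filter one sEMG channel to suppress motion artifacts and
    high-frequency noise (the band edges are assumptions, not patent values)."""
    b, a = butter(order, [low / (fs / 2.0), high / (fs / 2.0)], btype="band")
    return filtfilt(b, a, raw)

def smooth_angles(angles, window=5):
    """Moving-average smoothing of a joint-angle sequence."""
    kernel = np.ones(window) / window
    return np.convolve(angles, kernel, mode="same")
```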
Optionally, the electromyographic signals are acquired by surface electromyography sensors mounted over the muscle bellies of the superficial muscles that are active in the different walking modes.
Optionally, the joint angles are measured by an angle sensor placed near the joint or an angle sensor built in the motor.
S102: extracting gait feature vectors according to the preprocessed electromyographic signals and the angle values of all joints;
optionally, a gait feature vector including a plurality of feature values is extracted according to the processed electromyographic signals and the angle values of each joint.
Optionally, each extracted feature vector is a multidimensional vector composed of the electromyographic integral value of each acquisition channel for each frame of data, the electromyographic state value of each acquisition channel, and the angle values of the three right-leg joints at the first sampling point of the frame.
Optionally, the electromyographic state value indicates whether the muscle is active: after endpoint detection is performed on the surface electromyographic signal, the state between the activity start point and end point is defined as "1", and the relaxed state elsewhere is defined as "0".
Optionally, the electromyographic state is determined from the electromyographic integral value and the zero-crossing count.
Optionally, the electromyographic state of each acquisition channel for a frame of data is extracted by performing endpoint detection on the electromyographic signal.
Optionally, the endpoint detection method detects the myoelectric signal activity starting time and the activity ending time.
Optionally, the detection condition for the activity start time is: the electromyographic integral value is greater than the maximum integral value in the resting relaxed state, and the zero-crossing count is greater than or equal to the zero-crossing threshold in the resting relaxed state.
Optionally, the detection condition for the activity end time is: the electromyographic integral value is smaller than the maximum integral value in the resting relaxed state, and the zero-crossing count is smaller than the zero-crossing threshold.
Optionally, the amplitude threshold used when counting zero crossings is the larger of the absolute values of the maximum and the minimum of the electromyographic signal in the resting relaxed state.
Optionally, the zero-crossing count of the ith frame is calculated as:

$$\mathrm{ZCR}(i)=\sum_{n=1}^{N-1}\mathbf{1}\left[\,x_i(n)\,x_i(n+1)<0\ \text{and}\ \left|x_i(n)-x_i(n+1)\right|\ge Th\,\right]$$

where the larger of the absolute values of the maximum and the minimum of the electromyographic signal in the resting relaxed state is taken as the threshold Th for counting zero crossings, and n runs over the sampling points 1 to N of the frame.
Optionally, the maximum electromyographic integral value in the resting relaxed state is used as the detection threshold, and a suitable zero-crossing threshold is set according to the zero-crossing count of the electromyographic signal in that state; when a frame's electromyographic integral value and zero-crossing count are greater than or equal to the set thresholds and remain so for a period of time, that frame is taken as the activity start time, and when they are smaller than the set thresholds and remain so for a period of time, that frame is taken as the activity end time.
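The frame-wise endpoint detection described above can be sketched as a small state machine; the `hold` parameter (how many consecutive frames the condition must persist, standing in for "a period of time") is an assumed placeholder:

```python
def detect_activity(iemg, zcr, iemg_th, zcr_th, hold=3):
    """Label each frame 1 (active) or 0 (relaxed): activity starts when the
    frame IEMG and zero-crossing count meet their thresholds for `hold`
    consecutive frames, and ends when both stay below them for `hold` frames."""
    state, run, labels = 0, 0, []
    for e, z in zip(iemg, zcr):
        above = (e > iemg_th) and (z >= zcr_th)
        if state == 0:
            run = run + 1 if above else 0
            if run >= hold:
                state, run = 1, 0
        else:
            run = run + 1 if not above else 0
            if run >= hold:
                state, run = 0, 0
        labels.append(state)
    return labels
```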
Optionally, the integral value of the electromyographic signal of the ith frame is calculated as:

$$\mathrm{IEMG}(i)=\sum_{n=1}^{N}\left|x_i(n)\right|$$

where N is the number of sampling points per frame and x_i(n) is the electromyographic signal at the nth sampling point of the ith frame.
Optionally, the zero-crossing count of the ith frame is computed by the formula given above, where ZCR(i) denotes the zero-crossing count of the ith frame.
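A direct NumPy rendering of the two frame features and the resulting gait feature vector might look as follows (the channel layout and array shapes are assumptions for illustration):

```python
import numpy as np

def frame_features(frame, th):
    """frame: (n_channels, N) samples of one analysis frame;
    th: per-channel amplitude threshold Th from the resting relaxed state."""
    iemg = np.abs(frame).sum(axis=1)                 # IEMG(i) = sum_n |x_i(n)|
    sign_change = frame[:, :-1] * frame[:, 1:] < 0   # consecutive-sample sign flips
    big_step = np.abs(frame[:, :-1] - frame[:, 1:]) >= th[:, None]
    zcr = (sign_change & big_step).sum(axis=1)       # thresholded zero crossings
    return iemg, zcr

def gait_feature_vector(iemg, states, first_angles):
    """Concatenate per-channel IEMG values, per-channel activity states, and
    the three right-leg joint angles at the frame's first sampling point."""
    return np.concatenate([iemg, states, first_angles])
```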
S103: constructing a movement intention recognition model according to the gait feature vector to predict a movement intention; and/or constructing a joint angle identification model according to the gait feature vector to predict a joint angle.
Optionally, a motion intention recognition model is constructed according to the gait feature vectors to predict the motion intention.
Optionally, a joint angle recognition model is constructed according to the gait feature vector to predict the joint angle.
Optionally, a motion intention recognition model is built according to the gait feature vectors to predict motion intentions, and a joint angle recognition model is built according to the gait feature vectors to predict joint angles.
Optionally, the movement intention recognition model is trained with a support vector machine using a Gaussian kernel function, which offers good accuracy and generality for lower-limb motion prediction based on surface electromyographic signals; a training data set consisting of the electromyographic integral values of the multidimensional feature vectors and the corresponding movement-state phase labels is used to train the movement intention recognition model on the Gaussian-kernel support vector machine.
Optionally, the joint angle prediction model is trained with a linear-kernel support vector machine: the feature vectors belonging to the same gait phase and the corresponding joint angles form a data set for that gait phase, on which the linear-kernel joint angle prediction model is trained.
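As a hedged sketch of this training step using scikit-learn (hyperparameters such as C and gamma are assumed placeholders; the application does not specify them), an SVC with an RBF kernel plays the role of the Gaussian-kernel classifier, and a linear-kernel SVR is fitted per gait phase and joint:

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC, SVR

def train_intent_model(X, phase_labels):
    """Gaussian-kernel SVM classifying feature vectors into gait phases."""
    model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    return model.fit(X, phase_labels)

def train_angle_models(X, phase_labels, angles, phases=(1, 2, 3, 4), n_joints=3):
    """One linear-kernel SVR per (gait phase, joint): feature vectors of the
    same phase are regressed onto each of that phase's joint angles."""
    models = {}
    for p in phases:
        mask = np.asarray(phase_labels) == p
        for j in range(n_joints):
            reg = make_pipeline(StandardScaler(), SVR(kernel="linear", C=1.0))
            models[(p, j)] = reg.fit(X[mask], np.asarray(angles)[mask, j])
    return models
```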
The following example is given in conjunction with a specific embodiment:
Example 1:
a real-time lower limb multi-gait intention identification and motion prediction method aims at the motion state that a wearer of an adult man walks forward at a constant speed of 6KM/h on a horizontally placed running machine to identify gait phases and predict joint angle values of hip joints, knee joints and ankle joints of right legs.
The specific implementation steps are as follows:
firstly, collecting surface electromyographic signals of the muscle belly positions of extensor digitorum longus, gastrocnemius inner side, tibialis anterior muscle, rectus femoris, vastus lateralis, vastus medialis and biceps femoris and angle values of hip joint, knee joint and ankle joint under the motion state that a wearer walks forwards at a constant speed of 6KM/h on a horizontally placed running machine, and carrying out noise reduction on the obtained surface electromyographic signals and smoothing the angle values of the joints.
Secondly, features are extracted from the denoised surface electromyographic data and the joint angle data to obtain feature values and form a feature vector. The feature vector consists of the electromyographic integral value of each channel's surface electromyographic signal, the electromyographic state of each acquisition channel, and the angle values of the three joints at the current moment.
Thirdly, the movement intention is predicted based on the trained movement intention recognition model: the feature vectors and their corresponding gait phase categories are input into a Gaussian-kernel support vector machine to train the movement intention recognition model, the gait in this motion state being divided into four phases: early swing (numbered 1), late swing (numbered 2), early stance (numbered 3), and late stance (numbered 4).
Finally, the trained models M1, M11, M12, M13, M21, M22, M23, M31, M32, M33, M41, M42, and M43 are stored, together with the maximum electromyographic integral value and the average zero-crossing count of the surface electromyographic signal at each position in the standing relaxed state.
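One plausible reading of the stored models, assuming M1 is the intent recognition model and M11 through M43 are the joint angle models for each of the four gait phases and three joints (an interpretation of the naming, not stated explicitly above), gives the following run-time step:

```python
def predict_step(feature_vec, intent_model, angle_models, n_joints=3):
    """Classify the gait phase with the intent model (M1), then predict the
    hip, knee, and ankle angles with that phase's models (M11..M43 naming)."""
    phase = int(intent_model.predict([feature_vec])[0])
    angles = [angle_models[(phase, j)].predict([feature_vec])[0]
              for j in range(n_joints)]
    return phase, angles
```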
Similar in principle to the above-described embodiments, the present application provides a multi-modal intent recognition and motion prediction system, the system comprising:
the preprocessing module is used for acquiring electromyographic signals and joint angle values of two legs of a wearer in different states and preprocessing the electromyographic signals and the joint angle values;
the characteristic vector acquisition module is coupled with the preprocessing module and used for extracting characteristic values from the preprocessed electromyographic signals and the angle values of all joints;
a motion intent prediction module, coupled to the feature vector acquisition module, for predicting a motion intent based on the trained motion intent recognition model;
and the joint angle prediction module is coupled with the characteristic vector acquisition module and used for predicting the joint angle based on the trained joint angle recognition model.
Specific embodiments are provided below in conjunction with the attached figures:
fig. 2 is a schematic structural diagram illustrating a multi-step intention recognition and motion prediction system in an embodiment of the present application.
The system comprises:
the preprocessing module 21 is used for acquiring electromyographic signals and joint angle values of two legs of a wearer in different states and preprocessing the electromyographic signals and the joint angle values;
the characteristic vector acquisition module 22 is coupled to the preprocessing module 21 and is configured to extract gait characteristic vectors according to the preprocessed electromyographic signals and the angle values of the joints;
a motion intention predicting module 23, coupled to the feature vector obtaining module 22, configured to construct a motion intention identification model according to the gait feature vectors, so as to predict a motion intention; and/or constructing a joint angle identification model according to the gait feature vector to predict a joint angle.
Optionally, the electromyographic signals of the exoskeleton wearer under multiple channels under different gaits and the angle values of all joints are collected in real time.
Optionally, denoising is performed on the electromyographic signal, and smoothing is performed on the joint angle value.
Optionally, the electromyographic signals are acquired by surface electromyography sensors mounted over the muscle bellies of the superficial muscles that are active in the different walking modes.
Optionally, the joint angles are measured by an angle sensor placed near the joint or an angle sensor built in the motor.
Optionally, a gait feature vector including a plurality of feature values is extracted according to the processed electromyographic signals and the angle values of each joint.
Optionally, each extracted feature vector is a multidimensional vector composed of the electromyographic integral value of each acquisition channel for each frame of data, the electromyographic state value of each acquisition channel, and the angle values of the three right-leg joints at the first sampling point of the frame.
Optionally, the electromyographic state value indicates whether the muscle is active: after endpoint detection is performed on the surface electromyographic signal, the state between the activity start point and end point is defined as "1", and the relaxed state elsewhere is defined as "0".
Optionally, the electromyographic state is determined from the electromyographic integral value and the zero-crossing count.
Optionally, the electromyographic state of each acquisition channel for a frame of data is extracted by performing endpoint detection on the electromyographic signal.
Optionally, the endpoint detection method detects the myoelectric signal activity starting time and the activity ending time.
Optionally, the detection condition for the activity start time is: the electromyographic integral value is greater than the maximum integral value in the resting relaxed state, and the zero-crossing count is greater than or equal to the zero-crossing threshold in the resting relaxed state.
Optionally, the detection condition for the activity end time is: the electromyographic integral value is smaller than the maximum integral value in the resting relaxed state, and the zero-crossing count is smaller than the zero-crossing threshold.
Optionally, the amplitude threshold used when counting zero crossings is the larger of the absolute values of the maximum and the minimum of the electromyographic signal in the resting relaxed state.
Optionally, the zero-crossing count of the ith frame is calculated as:

$$\mathrm{ZCR}(i)=\sum_{n=1}^{N-1}\mathbf{1}\left[\,x_i(n)\,x_i(n+1)<0\ \text{and}\ \left|x_i(n)-x_i(n+1)\right|\ge Th\,\right]$$

where the larger of the absolute values of the maximum and the minimum of the electromyographic signal in the resting relaxed state is taken as the threshold Th for counting zero crossings, and n runs over the sampling points 1 to N of the frame.
Optionally, the maximum electromyographic integral value in the resting relaxed state is used as the detection threshold, and a suitable zero-crossing threshold is set according to the zero-crossing count of the electromyographic signal in that state; when a frame's electromyographic integral value and zero-crossing count are greater than or equal to the set thresholds and remain so for a period of time, that frame is taken as the activity start time, and when they are smaller than the set thresholds and remain so for a period of time, that frame is taken as the activity end time.
Optionally, the integral value of the electromyographic signal of the ith frame is calculated as:

$$\mathrm{IEMG}(i)=\sum_{n=1}^{N}\left|x_i(n)\right|$$

where N is the number of sampling points per frame and x_i(n) is the electromyographic signal at the nth sampling point of the ith frame.
Optionally, the zero-crossing count of the ith frame is computed by the formula given above, where ZCR(i) denotes the zero-crossing count of the ith frame.
Optionally, a motion intention recognition model is constructed according to the gait feature vectors to predict the motion intention.
Optionally, a joint angle recognition model is constructed according to the gait feature vector to predict the joint angle.
Optionally, a motion intention recognition model is built according to the gait feature vectors to predict motion intentions, and a joint angle recognition model is built according to the gait feature vectors to predict joint angles.
Optionally, the movement intention recognition model is trained with a support vector machine using a Gaussian kernel function, which offers good accuracy and generality for lower-limb motion prediction based on surface electromyographic signals; a training data set consisting of the electromyographic integral values of the multidimensional feature vectors and the corresponding movement-state phase labels is used to train the movement intention recognition model on the Gaussian-kernel support vector machine.
Optionally, the joint angle prediction model is trained with a linear-kernel support vector machine: the feature vectors belonging to the same gait phase and the corresponding joint angles form a data set for that gait phase, on which the linear-kernel joint angle prediction model is trained.
As shown in fig. 3, a schematic structural diagram of a multi-modal intent recognition and motion prediction terminal 30 in the embodiment of the present application is shown.
The multi-modal intention recognition and motion prediction terminal 30 includes: a memory 31 and a processor 32, the memory 31 being used for storing computer programs; the processor 32 runs a computer program to implement the multi-modal intent recognition and motion prediction method as described in fig. 1.
Optionally, the number of the memories 31 may be one or more, the number of the processors 32 may be one or more, and fig. 3 illustrates one example.
Optionally, the processor 32 in the multi-modal intent recognition and motion prediction terminal 30 may load one or more instructions corresponding to the application program into the memory 31 according to the steps shown in fig. 2, and the processor 32 runs the application program stored in the memory 31 so as to implement the various functions of the multi-modal intent recognition and motion prediction method shown in fig. 1.
Optionally, the memory 31 may include, but is not limited to, high-speed random access memory or non-volatile memory, such as one or more magnetic disk storage devices, flash memory devices, or other non-volatile solid-state storage devices. Optionally, the processor 32 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
The present application also provides a computer-readable storage medium storing a computer program which, when executed, implements the multi-modal intent recognition and motion prediction method shown in fig. 1. The computer-readable storage medium may include, but is not limited to, floppy disks, optical disks, CD-ROMs (compact disc read-only memories), magneto-optical disks, ROMs (read-only memories), RAMs (random access memories), EPROMs (erasable programmable read-only memories), EEPROMs (electrically erasable programmable read-only memories), magnetic or optical cards, flash memory, or other types of media/machine-readable media suitable for storing machine-executable instructions. The computer-readable storage medium may be a stand-alone product, or it may be a component integrated into a computer device.
In conclusion, the multi-modal intent recognition and motion prediction method, system, terminal, and medium of the present application solve the problem that the lower-limb power-assisted exoskeleton robot of the prior art cannot recognize and predict in real time during movement. The application therefore effectively overcomes various defects of the prior art and has high industrial value.
The above embodiments are merely illustrative of the principles and utilities of the present application and are not intended to limit the application. Any person skilled in the art can modify or change the above-described embodiments without departing from the spirit and scope of the present application. Accordingly, it is intended that all equivalent modifications or changes which can be made by those skilled in the art without departing from the spirit and technical concepts disclosed in the present application shall be covered by the claims of the present application.

Claims (10)

1. A multi-modal intent recognition and motion prediction method, comprising:
collecting electromyographic signals and joint angle values of two legs of a wearer in different states, and preprocessing the electromyographic signals and the joint angle values;
extracting gait feature vectors according to the preprocessed electromyographic signals and the angle values of all joints;
constructing a movement intention recognition model according to the gait feature vector to predict a movement intention; and/or constructing a joint angle identification model according to the gait feature vector to predict a joint angle.
2. The multi-modal intent recognition and motion prediction method of claim 1, wherein the gait feature vector comprises: the electromyographic signal integral value of each acquisition channel of each frame of data in the electromyographic signals, the electromyographic signal state value of each acquisition channel and the angle value of the three joints of the right leg corresponding to the first sampling point in the frame of data.
3. The multi-modal intent recognition and motion prediction method according to claim 2, wherein a Gaussian-kernel support vector machine is trained using the integral values and the electromyographic state values to construct the movement intention recognition model.
4. The multi-modal intent recognition and motion prediction method according to claim 2, wherein a linear-kernel support vector machine is trained using the integral values and the electromyographic state values to construct the joint angle recognition model.
5. The multi-modal intent recognition and motion prediction method of claim 1, wherein pre-processing the electromyographic signals and the respective joint angle values comprises: denoising the electromyographic signals and smoothing the angle values.
6. The multi-modal intent recognition and motion prediction method according to claim 4, wherein the electromyographic state value is determined from the electromyographic integral value and the zero-crossing count.
7. A multi-modal intent recognition and motion prediction system, comprising:
the preprocessing module is used for acquiring electromyographic signals and joint angle values of two legs of a wearer in different states and preprocessing the electromyographic signals and the joint angle values;
the characteristic vector acquisition module is coupled with the preprocessing module and used for extracting gait characteristic vectors according to the preprocessed electromyographic signals and the angle values of all joints;
the prediction module is coupled with the characteristic vector acquisition module and used for constructing a movement intention recognition model according to the gait characteristic vector so as to predict a movement intention; and/or constructing a joint angle identification model according to the gait feature vector to predict a joint angle.
8. The multi-modal intent recognition and motion prediction system according to claim 7, wherein the gait feature vector comprises: the electromyographic integral value of each acquisition channel for each frame of data, the electromyographic state value of each acquisition channel, and the angle values of the three right-leg joints at the first sampling point of the frame.
9. A multi-modal intent recognition and motion prediction terminal, comprising:
a memory for storing a computer program;
a processor for running the computer program to perform the multi-modal intent recognition and motion prediction method of any of claims 1-6.
10. A computer storage medium storing a computer program, wherein the computer program, when executed, implements the multi-modal intent recognition and motion prediction method according to any one of claims 1 to 6.
CN201911120066.3A 2019-11-15 2019-11-15 Multi-modal intent recognition and motion prediction method, system, terminal and medium Active CN112807001B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911120066.3A CN112807001B (en) 2019-11-15 2019-11-15 Multi-modal intent recognition and motion prediction method, system, terminal and medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911120066.3A CN112807001B (en) 2019-11-15 2019-11-15 Multi-modal intent recognition and motion prediction method, system, terminal and medium

Publications (2)

Publication Number Publication Date
CN112807001A true CN112807001A (en) 2021-05-18
CN112807001B CN112807001B (en) 2024-06-04

Family

ID=75851703

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911120066.3A Active CN112807001B (en) 2019-11-15 2019-11-15 Multi-modal intent recognition and motion prediction method, system, terminal and medium

Country Status (1)

Country Link
CN (1) CN112807001B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113425290A (en) * 2021-06-15 2021-09-24 燕山大学 Joint coupling time sequence calculation method for human body rhythm movement
CN114504322A (en) * 2022-01-27 2022-05-17 北京体育大学 Method, system and storage medium for predicting training effect of lower limb muscle strength
CN116725556A (en) * 2023-07-12 2023-09-12 中国科学院苏州生物医学工程技术研究所 Motion intention recognition method and device based on surface electromyographic signals
CN117281667A (en) * 2023-11-09 2023-12-26 浙江强脑科技有限公司 Motion pattern recognition method and device, intelligent artificial limb, terminal and storage medium

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104799854A (en) * 2015-04-29 2015-07-29 深圳大学 Surface myoelectricity acquisition device and myoelectricity signal processing method thereof
CN105615890A (en) * 2015-12-24 2016-06-01 西安交通大学 Angle and myoelectricity continuous decoding method for human body lower limb walking joint
CN105892676A (en) * 2016-04-26 2016-08-24 中国科学院自动化研究所 Human-machine interaction device, system and method of vascular intervention operation wire feeder
CN106955111A (en) * 2017-04-21 2017-07-18 海南大学 Brain paralysis youngster's gait recognition method based on surface electromyogram signal
CN107016233A (en) * 2017-03-14 2017-08-04 中国科学院计算技术研究所 The association analysis method and system of motor behavior and cognitive ability
CN107397649A (en) * 2017-08-10 2017-11-28 燕山大学 A kind of upper limbs exoskeleton rehabilitation robot control method based on radial base neural net
US20180235831A1 (en) * 2017-02-21 2018-08-23 Samsung Electronics Co., Ltd. Method and apparatus for walking assistance
CN108874149A (en) * 2018-07-28 2018-11-23 华中科技大学 A method of continuously estimating human synovial angle based on surface electromyogram signal
CN109276245A (en) * 2018-11-01 2019-01-29 重庆中科云丛科技有限公司 A kind of surface electromyogram signal characteristic processing and joint angles prediction technique and system
WO2019095055A1 (en) * 2017-11-15 2019-05-23 Uti Limited Partnership Method and system utilizing pattern recognition for detecting atypical movements during physical activity

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104799854A (en) * 2015-04-29 2015-07-29 深圳大学 Surface myoelectricity acquisition device and myoelectricity signal processing method thereof
CN105615890A (en) * 2015-12-24 2016-06-01 西安交通大学 Angle and myoelectricity continuous decoding method for human body lower limb walking joint
CN105892676A (en) * 2016-04-26 2016-08-24 中国科学院自动化研究所 Human-machine interaction device, system and method of vascular intervention operation wire feeder
US20180235831A1 (en) * 2017-02-21 2018-08-23 Samsung Electronics Co., Ltd. Method and apparatus for walking assistance
CN107016233A (en) * 2017-03-14 2017-08-04 中国科学院计算技术研究所 The association analysis method and system of motor behavior and cognitive ability
CN106955111A (en) * 2017-04-21 2017-07-18 海南大学 Brain paralysis youngster's gait recognition method based on surface electromyogram signal
CN107397649A (en) * 2017-08-10 2017-11-28 燕山大学 A kind of upper limbs exoskeleton rehabilitation robot control method based on radial base neural net
WO2019095055A1 (en) * 2017-11-15 2019-05-23 Uti Limited Partnership Method and system utilizing pattern recognition for detecting atypical movements during physical activity
CN108874149A (en) * 2018-07-28 2018-11-23 华中科技大学 A method of continuously estimating human synovial angle based on surface electromyogram signal
CN109276245A (en) * 2018-11-01 2019-01-29 重庆中科云丛科技有限公司 A kind of surface electromyogram signal characteristic processing and joint angles prediction technique and system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
XIAO JINZHUANG: "A surface electromyography-based pre-impact fall detection method", 2018 CHINESE AUTOMATION CONGRESS, 2 December 2018 (2018-12-02), pages 681 - 685 *
XIA CHUNYU: "Research on feature classification of high-density sEMG of the upper limb", Information Science and Technology, 15 January 2019 (2019-01-15), pages 1 - 69 *
LUO WUYI: "Prediction of human lower limb posture and motion state", China Master's Theses Full-text Database, Engineering Science and Technology II, no. 2019, pages 030 - 91 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113425290A (en) * 2021-06-15 2021-09-24 燕山大学 Joint coupling time sequence calculation method for human body rhythm movement
CN114504322A (en) * 2022-01-27 2022-05-17 北京体育大学 Method, system and storage medium for predicting training effect of lower limb muscle strength
CN114504322B (en) * 2022-01-27 2023-10-27 北京体育大学 Training effect prediction method, system and storage medium for muscle strength of lower limb
CN116725556A (en) * 2023-07-12 2023-09-12 中国科学院苏州生物医学工程技术研究所 Motion intention recognition method and device based on surface electromyographic signals
CN117281667A (en) * 2023-11-09 2023-12-26 浙江强脑科技有限公司 Motion pattern recognition method and device, intelligent artificial limb, terminal and storage medium
CN117281667B (en) * 2023-11-09 2024-04-09 浙江强脑科技有限公司 Motion pattern recognition method and device, intelligent artificial limb, terminal and storage medium

Also Published As

Publication number Publication date
CN112807001B (en) 2024-06-04

Similar Documents

Publication Publication Date Title
CN112807001A (en) Multi-modal intent recognition and motion prediction method, system, terminal, and medium
CN109953761B (en) Lower limb rehabilitation robot movement intention reasoning method
Xiong et al. Deep learning for EMG-based human-machine interaction: A review
CN110537922B (en) Human body walking process lower limb movement identification method and system based on deep learning
CN106067178B (en) A kind of continuous estimation method of hand joint movement based on muscle synergistic activation model
CN112754468A (en) Human body lower limb movement detection and identification method based on multi-source signals
CN110755070A (en) Multi-sensor fusion-based lower limb movement pose rapid prediction system and method
CN113043248B (en) Transportation and assembly whole-body exoskeleton system based on multi-source sensor and control method
Lee et al. Image transformation and CNNs: A strategy for encoding human locomotor intent for autonomous wearable robots
Ma et al. Design on intelligent perception system for lower limb rehabilitation exoskeleton robot
CN111898487A (en) Human motion mode real-time identification method of flexible exoskeleton system
Liu et al. sEMG-based continuous estimation of knee joint angle using deep learning with convolutional neural network
Liu et al. Metric learning for robust gait phase recognition for a lower limb exoskeleton robot based on sEMG
Rabe et al. Evaluating electromyography and sonomyography sensor fusion to estimate lower-limb kinematics using gaussian process regression
Zheng et al. A GMM-DTW-based locomotion mode recognition method in lower limb exoskeleton
CN113274039B (en) Prediction classification method and device based on surface electromyogram signals and motion signals
CN112487902B (en) Exoskeleton-oriented gait phase classification method based on TCN-HMM
Ma et al. A real-time gait switching method for lower-limb exoskeleton robot based on sEMG signals
CN112560594A (en) Human body gait recognition method of flexible exoskeleton system
Zhang et al. A real-time gait phase recognition method based on multi-information fusion
CN116115217A (en) Human lower limb gait phase estimation method based on depth network
Tong et al. BP-AR-based human joint angle estimation using multi-channel sEMG
Wang et al. Terrain recognition and gait cycle prediction using imu
CN115281657A (en) Human body gait recognition method of flexible exoskeleton system
Chen et al. An adaptive gait learning strategy for lower limb exoskeleton robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant