CN112123332A - Construction method of gesture classifier, exoskeleton robot control method and device - Google Patents


Info

Publication number
CN112123332A
CN112123332A (application CN202010799377.3A)
Authority
CN
China
Prior art keywords: signal, gesture, electromyographic signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010799377.3A
Other languages
Chinese (zh)
Inventor
李红红 (Li Honghong)
姚秀军 (Yao Xiujun)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Haiyi Tongzhan Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN202010799377.3A
Publication of CN112123332A
Legal status: Pending

Classifications

    • B25J 9/1664: Programme controls characterised by programming, planning systems for manipulators; motion, path, trajectory planning
    • B25J 9/0006: Programme-controlled manipulators; exoskeletons, i.e. resembling a human figure
    • B25J 9/1602: Programme controls characterised by the control system, structure, architecture

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An embodiment of the invention relates to a method for constructing a gesture classifier, and a method and device for controlling an exoskeleton robot. The exoskeleton robot control method comprises the following steps: collecting an electromyographic signal through an electromyographic signal collecting module; transforming the active segment of the electromyographic signal to obtain a corresponding frequency spectrum image; inputting the frequency spectrum image to a gesture classifier to obtain the gesture action corresponding to the electromyographic signal; generating a control instruction for the exoskeleton robot based on the gesture action, and controlling the exoskeleton motion of the exoskeleton robot based on the control instruction. In this way, features of the electromyographic signal can be extracted efficiently and accurately, enabling its subsequent classification and recognition.

Description

Construction method of gesture classifier, exoskeleton robot control method and device
Technical Field
The embodiment of the invention relates to the technical field of robots, in particular to a construction method of a gesture classifier, and a control method and device of an exoskeleton robot.
Background
An electromyographic signal is the temporal and spatial superposition of the action potentials of motor units in many muscle fibers, and it reflects neuromuscular activity to a certain extent. It therefore has important practical value in clinical medicine, ergonomics, rehabilitation medicine, sports science, and other fields. In rehabilitation medicine, the most common application is controlling the motion of a prosthesis through the electromyographic signals of human muscles. In such applications, features must be extracted from the electromyographic signals so that the person's motion intention can be identified from the extracted features.
At present, with the continuous development of sensor technology, the quality of collected electromyographic signals keeps improving, so researchers need to improve the methods for feature extraction and classification recognition of electromyographic signals. For feature extraction, research mainly focuses on designing new features, combining different features, and improving existing feature dimension-reduction methods; for classification recognition, research mainly focuses on optimizing classifier parameters.
However, the feature design process itself is cumbersome, and a large number of features have already been experimentally verified, which makes designing new features very difficult; combining different features and improving existing dimension-reduction methods also demand significant time and effort. Meanwhile, after years of research, the available general-purpose classification models have been largely exhausted, and selecting better parameters for them is very difficult. Feature extraction and classification recognition of electromyographic signals have therefore become a bottleneck that urgently needs to be broken through.
Disclosure of Invention
In view of the above, in order to solve the technical problem that feature extraction and classification recognition of an electromyographic signal are difficult, embodiments of the present invention provide a method for constructing a gesture classifier, and a method and an apparatus for controlling an exoskeleton robot.
In a first aspect, an embodiment of the present invention provides a method for constructing a gesture classifier, where the method includes:
obtaining at least one group of corresponding relations containing gesture actions and electromyographic signals of a user in the gesture action process;
transforming the active segments of the electromyographic signals in each group of corresponding relations respectively to obtain a training sample set, wherein each training sample takes the frequency spectrum image obtained by transformation as its input value and the gesture action as its label value;
and training the convolutional neural network based on the training sample set to obtain a gesture classifier, wherein the gesture classifier takes the frequency spectrum image as an input value and takes the gesture action as an output value.
In one possible embodiment, the transforming the active segments of the electromyographic signals in each set of the corresponding relations respectively includes:
and respectively transforming the active segments of the electromyographic signals in each group of corresponding relations by adopting a continuous wavelet transform algorithm to obtain corresponding frequency spectrum images.
In a possible embodiment, the transforming the active segments of the electromyographic signals in each set of the corresponding relations respectively comprises:
for each group of corresponding relations, grouping myoelectric signals in the corresponding relations based on acquisition time to obtain a plurality of signal groups;
and respectively converting the active segment of the electromyographic signals in each signal group to obtain corresponding frequency spectrum images.
In a possible embodiment, the grouping the electromyographic signals in the correspondence relationship based on the acquisition time to obtain a plurality of signal groups includes:
dividing a preset acquisition time period into a plurality of time windows according to a preset window length; each two adjacent time windows have an overlapping part;
grouping the electromyographic signals in the corresponding relation based on each time window to obtain a plurality of signal groups; the acquisition time of the electromyographic signals in the same signal group falls into the same time window, and different signal groups belong to different time windows.
In a possible embodiment, the obtaining at least one group of correspondence relationships including a gesture motion and an electromyographic signal of a user when the user makes the gesture motion includes:
and acquiring at least one group of corresponding relations, acquired according to a preset acquisition frequency, of the electromyographic signal acquisition module, wherein the corresponding relations comprise the gesture actions and the electromyographic signals of the user when the user makes the gesture actions, in the process that the user makes the gesture actions.
In a possible embodiment, before the separately transforming the active segments of the electromyographic signals in each set of the corresponding relations, the method further comprises:
extracting an envelope signal of the electromyographic signals aiming at each electromyographic signal in each group of the corresponding relation;
comparing the extracted envelope signals with a preset threshold in sequence, and when a first envelope signal larger than the preset threshold is found among the extracted envelope signals, determining that first envelope signal as the starting point signal of an active segment;
when a second envelope signal after the first envelope signal is found to be not larger than the preset threshold, determining the second envelope signal as the termination point signal of the active segment;
determining the signal segment from the starting point signal to the termination point signal in the electromyographic signal as the active segment.
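For illustration, the thresholding logic described above can be sketched as follows. This is a minimal sketch, not the patent's implementation: the moving-average envelope and the window length are assumptions, and the threshold value would be tuned in practice.

```python
import numpy as np

def extract_envelope(emg, win=5):
    """One simple envelope: moving average of the rectified signal (an assumed choice)."""
    rectified = np.abs(emg)
    kernel = np.ones(win) / win
    return np.convolve(rectified, kernel, mode="same")

def find_active_segment(envelope, threshold):
    """Return (start, end) indices of the first active segment:
    start = first sample whose envelope exceeds the threshold,
    end   = first later sample whose envelope drops back to <= threshold."""
    start = None
    for i, v in enumerate(envelope):
        if start is None and v > threshold:
            start = i
        elif start is not None and v <= threshold:
            return start, i
    # Signal stayed active until the end, or never crossed the threshold.
    return (start, len(envelope) - 1) if start is not None else None
```

With an envelope of `[0.1, 0.2, 0.9, 1.1, 0.8, 0.2, 0.1]` and a threshold of 0.5, the detected active segment spans indices 2 through 5.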
In a second aspect, an embodiment of the present invention provides an exoskeleton robot control method based on a gesture classifier as described in any one of the above, the method including:
collecting an electromyographic signal through an electromyographic signal collecting module;
converting the active segment of the electromyographic signal to obtain a corresponding frequency spectrum image;
inputting the frequency spectrum image to the gesture classifier to obtain gesture actions corresponding to the electromyographic signals;
generating control instructions to control the exoskeleton robot based on the gesture actions, and controlling exoskeleton motions of the exoskeleton robot based on the control instructions.
In a possible embodiment, the transforming the active segment of the electromyographic signal to obtain a corresponding spectrum image includes:
and transforming the active segment of the electromyographic signals by adopting a continuous wavelet transform algorithm or a Fourier transform algorithm to obtain a corresponding frequency spectrum image.
In a third aspect, an embodiment of the present invention provides a device for constructing a gesture classifier, where the device includes:
the signal acquisition module is used for acquiring the corresponding relation between at least one group of gesture actions and the electromyographic signals of the user in the gesture action process;
the signal conversion module is used for respectively converting the active sections of the electromyographic signals in each group of corresponding relations to obtain a training sample set, wherein each training sample takes the frequency spectrum image obtained by conversion as an input value and takes the gesture action as a label value;
and the model training module is used for training the convolutional neural network based on the training sample set to obtain a gesture classifier, wherein the gesture classifier takes the frequency spectrum image as an input value and the gesture action as an output value.
In a possible embodiment, the signal transformation module transforms the active segments of the electromyographic signals in each set of the corresponding relations, respectively, and includes:
and respectively transforming the active segments of the electromyographic signals in each group of corresponding relations by adopting a continuous wavelet transform algorithm to obtain corresponding frequency spectrum images.
In a possible embodiment, the signal transformation module transforms the active segments of the electromyographic signals in each set of the corresponding relations, respectively, and includes:
for each group of corresponding relations, grouping myoelectric signals in the corresponding relations based on acquisition time to obtain a plurality of signal groups;
and respectively converting the active segment of the electromyographic signals in each signal group to obtain corresponding frequency spectrum images.
In a possible embodiment, the signal transformation module groups the electromyographic signals in the correspondence relationship based on the acquisition time to obtain a plurality of signal groups, and the method includes:
dividing a preset acquisition time period into a plurality of time windows according to a preset window length; each two adjacent time windows have an overlapping part;
grouping the electromyographic signals in the corresponding relation based on each time window to obtain a plurality of signal groups; the acquisition time of the electromyographic signals in the same signal group falls into the same time window, and different signal groups belong to different time windows.
In a possible embodiment, the obtaining, by the signal obtaining module, at least one group of correspondences including a gesture action and an electromyographic signal of the user when the user makes the gesture action includes:
and acquiring at least one group of corresponding relations, acquired according to a preset acquisition frequency, of the electromyographic signal acquisition module, wherein the corresponding relations comprise the gesture actions and the electromyographic signals of the user when the user makes the gesture actions, in the process that the user makes the gesture actions.
In a possible embodiment, the apparatus further comprises:
an envelope signal extraction module, configured to extract an envelope signal of the electromyographic signal for each electromyographic signal in each group of the corresponding relationship;
the comparison module is used for comparing the extracted envelope signals with a preset threshold in sequence, and when a first envelope signal larger than the preset threshold is found among the extracted envelope signals, determining that first envelope signal as the starting point signal of an active segment; and when a second envelope signal after the first envelope signal is found to be not larger than the preset threshold, determining the second envelope signal as the termination point signal of the active segment;
and the activity section determining module is used for determining a signal section from the starting point signal to the end point signal in the electromyographic signals as an activity section.
In a fourth aspect, embodiments of the present invention provide an exoskeleton robot control apparatus, the apparatus comprising:
the signal acquisition module is used for acquiring the electromyographic signals through the electromyographic signal acquisition module;
the signal conversion module is used for converting the active segment of the electromyographic signal to obtain a corresponding frequency spectrum image;
the gesture recognition module is used for inputting the frequency spectrum image to the gesture classifier to obtain a gesture action corresponding to the electromyographic signal;
a control module to generate control instructions to control the exoskeleton robot based on the gesture motion and to control exoskeleton motions of the exoskeleton robot based on the control instructions.
In a possible embodiment, the signal transformation module transforms an active segment of an electromyographic signal to obtain a corresponding spectrum image, and includes:
and transforming the active segment of the electromyographic signals by adopting a continuous wavelet transform algorithm or a Fourier transform algorithm to obtain a corresponding frequency spectrum image.
In a fifth aspect, an embodiment of the present invention provides an electronic device, including: a processor and a memory, wherein the processor is configured to execute a construction program of the gesture classifier or an exoskeleton robot control program stored in the memory to implement the construction method of the gesture classifier according to the first aspect or the exoskeleton robot control method according to the second aspect.
In a sixth aspect, an embodiment of the present invention provides a storage medium storing one or more programs, where the one or more programs are executable by one or more processors to implement the method for constructing a gesture classifier according to the first aspect or the method for controlling an exoskeleton robot according to the second aspect.
The method for constructing a gesture classifier provided by the embodiment of the invention obtains at least one group of corresponding relations containing a gesture action and the electromyographic signals of a user making that gesture action, and transforms the active segments of the electromyographic signals in each group of corresponding relations to obtain a training sample set, where each training sample takes the frequency spectrum image obtained by transformation as its input value and the gesture action as its label value. A convolutional neural network is then trained on this training sample set to obtain a gesture classifier that takes a frequency spectrum image as input and outputs a gesture action.
According to the exoskeleton robot control method provided by the embodiment of the invention, an electromyographic signal is collected through the electromyographic signal collection module, the active segment of the electromyographic signal is transformed to obtain a corresponding frequency spectrum image, and the frequency spectrum image is input to the gesture classifier to obtain the gesture action corresponding to the electromyographic signal. A control instruction for the exoskeleton robot is generated based on the gesture action, and the exoskeleton motion of the exoskeleton robot is controlled based on the control instruction. In this way, the user's motion intention can be determined from the user's electromyographic signals, and the exoskeleton motion of the exoskeleton robot can be controlled according to that intention.
In the above methods, the corresponding frequency spectrum image is obtained by transforming the active segment of the electromyographic signal, and the frequency spectrum image itself serves as the feature, so researchers no longer need to perform feature design, feature combination, and similar work. The features of the electromyographic signal can therefore be extracted efficiently and accurately, enabling its subsequent classification and recognition.
Drawings
Fig. 1 is a flowchart of an embodiment of a method for constructing a gesture classifier according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a network architecture of a gesture classifier;
FIG. 3 is a flowchart illustrating an embodiment of determining an active segment of an electromyographic signal according to an embodiment of the present invention;
fig. 4 is a flowchart illustrating an embodiment of a control method for an exoskeleton robot according to an embodiment of the present invention;
fig. 5 is a block diagram of an embodiment of a device for constructing a gesture classifier according to an embodiment of the present invention;
fig. 6 is a block diagram of an embodiment of an exoskeleton robot control apparatus according to an embodiment of the present invention;
fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
For the convenience of understanding of the embodiments of the present invention, the following description will be made in terms of specific embodiments with reference to the accompanying drawings, which are not intended to limit the embodiments of the present invention.
Referring to fig. 1, a flowchart of an embodiment of a method for constructing a gesture classifier according to an embodiment of the present invention is provided. As shown in fig. 1, the method comprises the steps of:
step 101, obtaining at least one group of corresponding relations containing gesture actions and electromyographic signals of a user in the process of making the gesture actions.
In the embodiment of the present invention, in order to implement recognition and classification of multiple gesture actions, at least one group of corresponding relationships including the gesture action and an electromyographic signal of a user in the gesture action process may be respectively obtained for each preset gesture action. The preset gesture actions include, but are not limited to: fist making, wrist bending, wrist stretching, palm stretching, internal rotation, external rotation, finger pinching, etc. The electromyographic signals can be multichannel needle electrode electromyographic signals or multichannel surface electrode electromyographic signals.
Taking one gesture action as an example, implementations of obtaining at least one group of corresponding relations between the gesture action and the electromyographic signals of the user making it are described below:
in one embodiment, the user may be required to hold a gesture for 5 seconds, and during the gesture, i.e. 5 seconds, the electromyographic signal acquisition module, e.g. an 8-channel arm ring electromyographic sensor, worn on the forearm of the user is controlled to acquire the electromyographic signal of the forearm of the user at a preset acquisition frequency, e.g. every 100 ms. Therefore, each group of corresponding relations comprises a plurality of electromyographic signals, such as 50 electromyographic signals.
Furthermore, the electromyographic signal acquisition module can transmit the acquired electromyographic signals to the electronic equipment in a wired or wireless transmission mode, so that the electronic equipment can obtain at least one group of corresponding relations containing gesture actions and the electromyographic signals of the user when the user makes the gesture actions.
In another embodiment, the user may also be required to make the same gesture action multiple times, such as 5 times. Each time the user makes the gesture, the electromyographic signal acquisition module is controlled to collect the electromyographic signals of the user's forearm at the preset acquisition frequency, yielding more groups of corresponding relations between the gesture action and the electromyographic signals. This increases the number of training samples, improves the accuracy of the finally trained gesture classifier, and helps avoid overfitting.
Of course, to prevent muscle fatigue in the user's forearm from repeating the same gesture many times, which would introduce large deviations in the collected electromyographic signals, the user may be required to rest for a period of time, for example 30 seconds, after each repetition of the gesture.
And 102, respectively transforming the active segments of the electromyographic signals in each group of corresponding relations to obtain a training sample set, wherein each training sample takes the frequency spectrum image obtained by transformation as an input value and takes the gesture type of the gesture action as a label value.
In the embodiment of the present invention, for each group of corresponding relations obtained in step 101, the active segments of the electromyographic signals in the group of corresponding relations are transformed to obtain corresponding spectral images, and then, a training sample can be constructed by using each spectral image, where the training sample uses the spectral image as an input value and uses a gesture as a tag value. The frequency spectrum image can represent the time domain characteristic and the frequency domain characteristic of the electromyographic signal.
In one embodiment, a plurality of training samples may be constructed for each group of the corresponding relationship. Specifically, for each group of corresponding relations, the electromyographic signals in the corresponding relations are grouped based on the acquisition time (namely the acquisition time of the electromyographic signals) to obtain a plurality of signal groups; then, the active segments of the electromyographic signals in each signal group are respectively transformed to obtain the frequency spectrum image corresponding to the signal group. It can be seen that, in this embodiment, a plurality of spectral images can be obtained for each set of corresponding relationships, and a training sample can be constructed based on one spectral image, so that the construction of a plurality of training samples for each set of corresponding relationships can be realized through the above processing. Of course, the label values of the training samples constructed for a set of correspondences are the same.
As an optional implementation manner, the grouping the electromyographic signals in the corresponding relationship based on the acquisition time to obtain a plurality of signal groups includes: and sequencing the electromyographic signals in the corresponding relation according to the sequence of the acquisition time, and then grouping sequencing results according to a set time interval or a set number to obtain a plurality of signal groups. For example, assuming that a group of corresponding relationships includes 50 electromyographic signals, and assuming that the collection frequency is 100Hz (i.e., the electromyographic signals are collected every 10 ms), and the set time interval is 100ms, the electromyographic signals in the group of corresponding relationships may be divided into 5 groups, and each group includes 10 electromyographic signals. Based on this, 5 training samples can be finally constructed for the corresponding relationship of the group.
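The fixed-interval grouping in this implementation can be sketched as follows; the numbers mirror the worked example above (50 signals at 100 Hz, grouped into 100 ms intervals) and are illustrative only:

```python
def group_by_interval(samples, sample_period_ms, interval_ms):
    """Group time-ordered samples into consecutive, non-overlapping intervals."""
    per_group = interval_ms // sample_period_ms
    return [samples[i:i + per_group] for i in range(0, len(samples), per_group)]

signals = list(range(1, 51))   # 50 EMG samples, numbered 1..50
groups = group_by_interval(signals, sample_period_ms=10, interval_ms=100)
# 5 groups of 10 samples each, as in the example above
```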
As another optional implementation, grouping the electromyographic signals in the corresponding relation based on the acquisition time to obtain a plurality of signal groups includes: dividing a preset acquisition time period into a plurality of time windows according to a preset window length, with an overlapping part between every two adjacent time windows. The acquisition time period here refers to the time period over which the group of corresponding relations was collected, for example the 5 seconds in the example above. For instance, the first time window is 0-100 ms, the second time window is 75-175 ms, the third time window is 150-250 ms, and so on. It can be seen that in this example the overlap between every two adjacent time windows has the same length, 25 ms.
Next, grouping the electromyographic signals in the corresponding relation based on each time window to obtain a plurality of signal groups; wherein, the collection time of the electromyographic signals in the same signal group falls into the same time window, and different signal groups belong to different time windows. For example, assuming that a group of corresponding relationships includes 50 electromyographic signals (for convenience of description, the 50 electromyographic signals are respectively numbered 1 to 50), and assuming that the collection frequency is 100Hz (that is, the electromyographic signals are collected every 10 ms), 7 signal groups can be obtained according to the above description, which are respectively the electromyographic signals numbered 1 to 11, the electromyographic signals numbered 9 to 18, the electromyographic signals numbered 16 to 26, the electromyographic signals numbered 24 to 33, the electromyographic signals numbered 31 to 41, the electromyographic signals numbered 39 to 48, and the electromyographic signals numbered 46 to 50. Based on this, 7 training samples can be finally constructed for the corresponding relationship of the group.
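The overlapping-window grouping above can be sketched as follows; this minimal illustration treats window boundaries as inclusive (consistent with the worked example, where the first window's samples are numbered 1 to 11) and reproduces the 7 signal groups described:

```python
def group_by_windows(times_ms, window_ms=100, step_ms=75, total_ms=None):
    """Assign samples (by acquisition time) to overlapping time windows.
    Window k covers [k*step, k*step + window], boundaries inclusive, so with
    these defaults adjacent windows share a 25 ms overlap."""
    if total_ms is None:
        total_ms = max(times_ms)
    groups = []
    start = 0
    while start <= total_ms:
        end = start + window_ms
        group = [i + 1 for i, t in enumerate(times_ms) if start <= t <= end]
        if group:
            groups.append(group)
        start += step_ms
    return groups

times = [i * 10 for i in range(50)]   # samples at 0, 10, ..., 490 ms (100 Hz)
groups = group_by_windows(times)
# 7 groups: samples 1-11, 9-18, 16-26, 24-33, 31-41, 39-48, 46-50
```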
Therefore, compared with the previous implementation mode, the implementation mode can expand the number of the training samples, and the phenomenon that the gesture classifier trained finally is low in generalization capability and overfitting occurs due to the fact that the training samples are few can be avoided.
Optionally, a continuous wavelet transform algorithm is used to transform the active segments of the electromyographic signals in each group of corresponding relations to obtain corresponding frequency spectrum images. In one example, the mother wavelet of the continuous wavelet transform is the Mexican hat wavelet. The frequency spectrum image obtained by the continuous wavelet transform has size m x n, where m is the number of scales and n is the number of electromyographic signal samples.
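A continuous wavelet transform with a Mexican hat mother wavelet can be sketched in plain NumPy as below. This is an illustrative sketch, not the patent's implementation; the scale range, signal length, and wavelet support heuristic are assumptions:

```python
import numpy as np

def mexican_hat(points, scale):
    """Mexican hat (Ricker) wavelet sampled at `points` positions for one scale."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / scale
    amp = 2 / (np.sqrt(3 * scale) * np.pi ** 0.25)
    return amp * (1 - x ** 2) * np.exp(-x ** 2 / 2)

def cwt_spectrum(signal, scales):
    """Continuous wavelet transform: one row of coefficients per scale.
    The result is an m-by-n 'spectrum image' (m scales, n samples)."""
    out = np.empty((len(scales), len(signal)))
    for row, s in enumerate(scales):
        wavelet = mexican_hat(min(10 * int(s), len(signal)), s)
        out[row] = np.convolve(signal, wavelet, mode="same")
    return out

emg = np.sin(np.linspace(0, 8 * np.pi, 200))   # stand-in for an EMG active segment
image = cwt_spectrum(emg, scales=np.arange(1, 31))   # a 30 x 200 spectrum image
```

With 30 scales and a 200-sample active segment, the resulting image has size m x n = 30 x 200, matching the shape described above.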
And 103, training the convolutional neural network based on the training sample set to obtain a gesture classifier, wherein the gesture classifier takes the frequency spectrum image as an input value and takes the gesture action as an output value.
In the embodiment of the invention, the convolutional neural network model is adopted to realize the gesture classifier in consideration of the advantages of the convolutional neural network in image processing. Based on this, in step 103, the convolutional neural network is trained based on the training sample set to obtain a gesture classifier, where the gesture classifier takes the spectrum image as an input value and takes the gesture action as an output value.
It is understood that the trained gesture classifier can be viewed as a functional relationship between an input value and an output value, where the output value depends on the input value; this relationship can be written as:
y=f(x)
wherein x represents an input value, i.e. a spectral image, and y represents an output value, i.e. a gesture motion.
In one example, as shown in FIG. 2, the finally trained gesture classifier consists of 4 convolutional layers, 2 fully-connected layers, a Max pooling layer, a Dropout layer, and a Softmax layer. Each convolutional layer performs a convolution operation on its input data with fixed-size convolution kernels, and its output serves as the input of the next convolutional layer. The fully-connected layers act as the classifier of the whole convolutional neural network, mapping the learned distributed feature representation to the sample label space. The Max pooling layer, i.e. the pooling layer, integrates features. The Dropout layer reduces the number of intermediate features, thereby reducing redundancy, preventing model overfitting, and improving the generalization capability of the model. The Softmax layer performs the regression operation on the classification result, producing a probability for each gesture class. The training algorithm used to obtain the gesture classifier is not specifically limited in the present invention.
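A shape-level sketch of the layer stack in FIG. 2, using plain NumPy with random weights; the 3x3 kernels, single input channel, layer widths, and 5 gesture classes are illustrative assumptions (the patent fixes only the layer types and counts), and real training would learn the weights:

```python
import numpy as np

def conv2d(x, k):
    """'Valid' 2-D convolution of a single-channel image with one kernel."""
    H, W = x.shape
    kh, kw = k.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def maxpool(x, s=2):
    H, W = (x.shape[0] // s) * s, (x.shape[1] // s) * s
    return x[:H, :W].reshape(H // s, s, W // s, s).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
x = rng.standard_normal((32, 50))           # spectrum image: m=32 scales, n=50 samples

for _ in range(4):                           # 4 convolutional layers, 3x3 kernels
    x = relu(conv2d(x, rng.standard_normal((3, 3))))
x = maxpool(x)                               # Max pooling layer: feature integration
x = x.flatten()
x = x * (rng.random(x.size) > 0.5)           # Dropout layer (skipped at inference)
w1 = rng.standard_normal((64, x.size))
x = relu(w1 @ x)                             # fully-connected layer 1
w2 = rng.standard_normal((5, 64))
logits = w2 @ x                              # fully-connected layer 2: 5 gestures
probs = softmax(logits)                      # Softmax layer
print(probs.shape)
```

With random weights the probabilities are meaningless, but the pass shows how a 32x50 spectrum image flows through the stack to a per-gesture probability vector.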
The method for constructing the gesture classifier provided by the invention obtains at least one group of corresponding relations containing a gesture action and the electromyographic signals of the user when making that gesture action, and transforms the active segments of the electromyographic signals in each group of corresponding relations to obtain a training sample set, where each training sample takes the spectrum image obtained by the transformation as an input value and the gesture action as a label value. Training a convolutional neural network on this training sample set yields a gesture classifier that takes the spectrum image as input value and the gesture action as output value. Applying this gesture classifier, the user's action intention can be determined from the user's electromyographic signals, and the motion of a mechanical structure can then be controlled according to that intention.
In the method, the corresponding spectrum image is obtained by transforming the active segment of the electromyographic signal, and the spectrum image itself is used as the feature, so researchers do not need to perform feature design, feature combination and similar work; the method can therefore extract features of the electromyographic signal efficiently and accurately for subsequent classification and identification.
On the basis of the above embodiment, before step 102, the method may further include: preprocessing the electromyographic signals in each group of corresponding relations, for example, removing 50 Hz power-line interference with a notch filter and then applying 20-450 Hz band-pass filtering. This can improve the accuracy of the time-frequency features characterized by the spectrum images obtained by the subsequent transformation.
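This preprocessing can be sketched with SciPy's standard filter-design routines; the 1 kHz sampling rate and filter order are assumptions (the text does not state them; note a 20-450 Hz band-pass requires fs > 900 Hz):

```python
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

fs = 1000.0  # assumed sampling rate

b_notch, a_notch = iirnotch(w0=50.0, Q=30.0, fs=fs)            # 50 Hz mains notch
b_bp, a_bp = butter(4, [20.0, 450.0], btype="bandpass", fs=fs)  # 20-450 Hz band-pass

# Toy "EMG": a 100 Hz component plus 50 Hz power-line hum.
t = np.arange(2000) / fs
raw = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 100 * t)

# Zero-phase filtering: notch first, then band-pass.
clean = filtfilt(b_bp, a_bp, filtfilt(b_notch, a_notch, raw))

# The 50 Hz hum is strongly attenuated while the 100 Hz content survives.
spectrum = np.abs(np.fft.rfft(clean))
freqs = np.fft.rfftfreq(len(clean), 1 / fs)
print(spectrum[freqs == 50.0][0] < spectrum[freqs == 100.0][0])  # True
```

`filtfilt` is used so the preprocessing adds no phase distortion, which matters when the active segment is later located by amplitude.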
On the basis of the above embodiment, before step 102, the method may further include: and aiming at each group of corresponding relations, determining the active segment of the electromyographic signals in the corresponding relations. In an embodiment, as shown in fig. 3, determining the active segment of the electromyographic signal comprises the steps of:
step 301, extracting an envelope signal of the electromyographic signal.
In application, the amplitude of the electromyographic signal is small just before entering or just after leaving the active segment, so it is difficult to distinguish the resting segment from the active segment directly from the signal values. Therefore, as an optional implementation, the electromyographic signals are first processed based on a kernel function, and the envelope signals of the processed electromyographic signals are then extracted, so that the envelope shows a larger amplitude change when entering or leaving the active segment, improving the accuracy of active-segment detection.
In one example, the electromyographic signals are processed based on a kernel function by the following steps, and then envelope signals corresponding to the processed electromyographic signals are extracted:
step 1.1: a kernel function is initialized.
Here the kernel function may be a data set holding a leading window of the electromyographic signal, with an initial state of all zeros. For example, if the kernel function is kernel = {j1, j2, j3, ..., jn}, then in the initial state j1 = ... = jn = 0, where j1, ..., jn are the data points in the kernel function; the kernel function is gradually filled with data during signal processing.
Step 1.2: and (4) leading the electromyographic signals into the kernel function one by one according to the sequence of the electromyographic signal acquisition time from first to last.
For example, if the collected electromyographic signals, sorted by acquisition time, form the sequence s1, s2, ..., si, si+1, ..., then the electromyographic signals are imported into the kernel function in that order.
Step 1.3: and updating the kernel function after each electromyographic signal is introduced, and calculating the unit equidistant integral of the updated kernel function by adopting a trapezoidal method.
For example, after the electromyographic signal s1 is imported, the kernel function is updated to kernel = {j2, ..., jn, s1} with j2 = ... = jn = 0; after s2 is imported, the kernel function is updated to kernel = {j3, ..., jn, s1, s2} with j3 = ... = jn = 0; and so on.
After s1 is imported, the unit equidistant integral envelopeSignal of kernel = {j2, ..., jn, s1} is calculated by the trapezoidal method according to the following formula:

envelopeSignal = sum{j2, ..., jn, s1} ÷ 2;

After s2 is imported, the unit equidistant integral envelopeSignal of kernel = {j3, ..., jn, s1, s2} is calculated by the trapezoidal method according to the following formula:

envelopeSignal = sum{j3, ..., jn, s1, s2} ÷ 2;
step 1.4: and the obtained unit equidistant integral is used as an envelope signal corresponding to the electromyographic signal of the imported kernel function.
That is, the unit equidistant integral corresponding to the electromyographic signal s1 is used as its envelope signal y1, the unit equidistant integral corresponding to s2 is used as its envelope signal y2, and so on; the envelope signals {y1, y2, ..., yi, yi+1, ...} are thus calculated from the electromyographic signals {s1, s2, ..., si, si+1, ...}.
Step 302, comparing the extracted envelope signals with a preset threshold value in sequence, and when a first envelope signal larger than the preset threshold value is determined from the extracted envelope signals for the first time, determining the first envelope signal as a start signal of the active segment; and when the second envelope signal after the first envelope signal is determined not to be larger than the preset threshold value, determining the second envelope signal as the termination point signal of the active segment.
The preset threshold is a preset signal value for judging whether the electromyographic signal is in an active segment, when the electromyographic signal is greater than the preset threshold, the electromyographic signal is in the active segment, and when the electromyographic signal is not greater than the preset threshold, the electromyographic signal is in a resting segment.
Based on this, the extracted envelope signals may be sequentially compared with a preset threshold, and when an envelope signal (hereinafter, referred to as a first envelope signal) greater than the preset threshold is determined from the extracted envelope signals for the first time, the first envelope signal is determined as a start signal of the active segment; when it is determined that one envelope signal (hereinafter, referred to as a second envelope signal) following the first envelope signal is not greater than the preset threshold value, the second envelope signal is determined as the termination point signal of the active segment.
Optionally, a specific numerical value of the preset threshold may be set empirically in the application, and the preset threshold may be adjusted according to actual requirements, so as to improve the accuracy of determining the active segment.
Step 303, determining a signal segment from the starting point signal to the ending point signal in the electromyographic signals as an active segment.
It should be noted that the above method is only one example of determining an active segment in an electromyographic signal, and in practical applications, other methods may also be used to determine an active segment in an electromyographic signal, such as a short-time fourier method, a self-organizing artificial neural network method, a moving average method, and the like, which is not limited in this respect.
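The envelope extraction of steps 1.1-1.4 and the threshold comparison of steps 301-303 can be sketched as follows. The kernel length n = 8 and the threshold are illustrative choices not fixed by the text, and the standard unit-spacing trapezoidal rule (sum minus half the endpoints) is written out explicitly, which differs slightly from the literal sum{...} ÷ 2 formula above:

```python
import numpy as np

def envelope(sig, n=8):
    """Steps 1.1-1.4: slide an n-point kernel (zero-initialized) over the
    rectified signal and take its unit-spaced trapezoidal integral."""
    kernel = np.zeros(n)                       # step 1.1: initial kernel, all zeros
    env = []
    for s in np.abs(sig):                      # step 1.2: import samples in order
        kernel = np.append(kernel[1:], s)      # step 1.3: update the kernel
        env.append(kernel.sum() - (kernel[0] + kernel[-1]) / 2.0)  # trapezoid rule
    return np.array(env)                       # step 1.4: one envelope value / sample

def active_segment(sig, threshold):
    """Steps 301-303: the first envelope value above the threshold is the start
    signal; the first value after it not above the threshold is the end signal."""
    env = envelope(sig)
    above = env > threshold
    if not above.any():
        return None                            # resting only, no action potential
    start = int(np.argmax(above))              # first crossing (step 302)
    rest = np.nonzero(~above[start:])[0]
    end = start + int(rest[0]) if rest.size else len(sig) - 1
    return start, end                          # step 303: [start, end] segment

# Toy signal: rest, burst, rest.
sig = np.concatenate([np.zeros(30), np.ones(40), np.zeros(30)])
print(active_segment(sig, threshold=2.0))
```

Because the kernel integrates several samples, the envelope ramps up over a few samples at the burst boundary, so the detected segment is slightly delayed and extended relative to the raw burst, which is the smoothing the text relies on.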
In practical application, the gesture classifier trained by the above construction method can be applied to the field of controlling the motion of a mechanical structure based on electromyographic signals. In one example, the exoskeleton robot is a robot controlled by electromyographic signals: it recognizes the user's action intention from the electromyographic signals so as to control its own movement. Based on the gesture classifier, the embodiment of the invention further provides an exoskeleton robot control method based on the gesture classifier.
Referring to fig. 4, a flowchart of an embodiment of a control method for an exoskeleton robot according to an embodiment of the present invention is provided. As shown in fig. 4, the method comprises the steps of:
step 401, collecting the electromyographic signals through an electromyographic signal collecting module.
In application, an exoskeleton robot is generally provided with an electromyographic signal acquisition module (such as an electromyographic sensor), a control module (such as a CPU controller), and an exoskeleton (i.e. a mechanical structure, such as a mechanical arm, a manipulator, a mechanical leg, etc.).
The electromyographic signal acquisition module acquires an electromyographic signal through an electrode arranged on the skin of a user.
Step 402, converting the active segment of the electromyographic signal to obtain a corresponding frequency spectrum image.
In application, the control module determines, from the electromyographic signals collected by the electromyographic signal collecting module, whether an action potential is generated (i.e., detects whether an active segment exists in the electromyographic signals); when an action potential is generated, it determines the active segment of the electromyographic signal and transforms that active segment to obtain the corresponding spectrum image.
For how to determine the active segment of the electromyographic signal and transform the active segment of the electromyographic signal to obtain the corresponding spectrum image, reference may be made to the above description, and details are not described here.
And 403, inputting the frequency spectrum image into a gesture classifier to obtain a gesture action corresponding to the electromyographic signal.
As can be seen from the above description, the gesture classifier takes the spectrum image as input and the gesture action as output, so the control module may input the spectrum image obtained by the transformation in step 402 into the gesture classifier to obtain the gesture action corresponding to the electromyographic signal.
Step 404, generating control instructions for controlling the exoskeleton robot based on the gesture motion, and controlling exoskeleton motions of the exoskeleton robot based on the control instructions.
In one embodiment, the corresponding relationship between the gesture actions and the control commands is preset, the corresponding relationship is stored in a local memory of the exoskeleton robot or a data node which can be accessed by a control module of the exoskeleton robot in advance, so that the control module can determine the corresponding control commands in a table look-up manner after determining the gesture actions corresponding to the myoelectric signals, and the exoskeleton motions of the exoskeleton robot are controlled based on the control commands.
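The table look-up from gesture action to control instruction can be sketched as follows; the gesture names and command strings are hypothetical, since the text only requires that the mapping be stored in advance and queried by the control module:

```python
# Illustrative mapping from classifier output (gesture action) to a control
# instruction; all names here are assumed, not from the patent.
GESTURE_TO_COMMAND = {
    "fist":       "GRIP_CLOSE",
    "open_hand":  "GRIP_OPEN",
    "wrist_up":   "ARM_RAISE",
    "wrist_down": "ARM_LOWER",
}

def command_for(gesture):
    """Table look-up as described in the text; None if the gesture is unknown."""
    return GESTURE_TO_COMMAND.get(gesture)

print(command_for("fist"))     # GRIP_CLOSE
print(command_for("unknown"))  # None
```

Returning None for unmapped gestures lets the control module ignore misclassifications instead of issuing a spurious exoskeleton motion.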
According to the exoskeleton robot control method provided by the embodiment of the invention, the electromyographic signals are collected by the electromyographic signal collection module, the active segment of the electromyographic signal is transformed to obtain the corresponding spectrum image, the spectrum image is input to the gesture classifier to obtain the gesture action corresponding to the electromyographic signal, a control instruction for controlling the exoskeleton robot is generated based on the gesture action, and the exoskeleton motion of the exoskeleton robot is controlled based on the control instruction. In this way, the user's action intention can be determined from the user's electromyographic signals, and the exoskeleton motion of the exoskeleton robot can be controlled according to that intention.
Corresponding to the embodiment of the construction method of the gesture classifier, the invention also provides an embodiment of a construction device of the gesture classifier.
Referring to fig. 5, a block diagram of an embodiment of a device for constructing a gesture classifier is provided for the embodiment of the present invention. As shown in fig. 5, the apparatus includes:
the signal acquisition module 51 is configured to obtain at least one group of corresponding relations containing a gesture action and the electromyographic signals of the user when making the gesture action;
a signal transformation module 52, configured to transform the active segments of the electromyographic signals in each group of the corresponding relationships to obtain a training sample set, where each training sample takes a frequency spectrum image obtained through transformation as an input value, and takes the gesture action as a tag value;
and the model training module 53 is configured to train the convolutional neural network based on the training sample set to obtain a gesture classifier, where the gesture classifier takes the spectral image as an input value and takes a gesture action as an output value.
In a possible embodiment, the signal transformation module 52 transforms the active segments of the electromyographic signals in each set of the corresponding relations, respectively, and includes:
and respectively transforming the active segments of the electromyographic signals in each group of corresponding relations by adopting a continuous wavelet transform algorithm to obtain corresponding frequency spectrum images.
In a possible embodiment, the signal transformation module 52 transforms the active segments of the electromyographic signals in each set of the corresponding relations, respectively, and includes:
for each group of corresponding relations, grouping myoelectric signals in the corresponding relations based on acquisition time to obtain a plurality of signal groups;
and respectively converting the active segment of the electromyographic signals in each signal group to obtain corresponding frequency spectrum images.
In a possible embodiment, the signal transformation module 52 groups the electromyographic signals in the correspondence relationship based on the acquisition time to obtain a plurality of signal groups, including:
dividing a preset acquisition time period into a plurality of time windows according to a preset window length; each two adjacent time windows have an overlapping part;
grouping the electromyographic signals in the corresponding relation based on each time window to obtain a plurality of signal groups; the acquisition time of the electromyographic signals in the same signal group falls into the same time window, and different signal groups belong to different time windows.
In a possible embodiment, the signal acquisition module 51 obtains at least one group of corresponding relations between a gesture action and the electromyographic signals of the user when making the gesture action by:
acquiring, from an electromyographic signal acquisition module, at least one group of corresponding relations containing the gesture action and the electromyographic signals collected at a preset acquisition frequency while the user makes the gesture action.
In a possible embodiment, the device further comprises (not shown in fig. 5):
an envelope signal extraction module, configured to extract an envelope signal of the electromyographic signal for each electromyographic signal in each group of the corresponding relationship;
the comparison module is used for comparing the extracted envelope signals with a preset threshold value in sequence, and when a first envelope signal which is larger than the preset threshold value is determined from the extracted envelope signals for the first time, the first envelope signal is determined as a starting point signal of the active segment; when determining that a second envelope signal after the first envelope signal is not larger than the preset threshold value, determining the second envelope signal as a termination point signal of an active segment;
and the activity section determining module is used for determining a signal section from the starting point signal to the end point signal in the electromyographic signals as an activity section.
Corresponding to the embodiments of the exoskeleton robot control method, the invention also provides embodiments of the exoskeleton robot control device.
Referring to fig. 6, a block diagram of an embodiment of an exoskeleton robot control apparatus according to an embodiment of the present invention is provided. As shown in fig. 6, the apparatus includes:
the signal acquisition module 61 is used for acquiring an electromyographic signal through the electromyographic signal acquisition module;
the signal conversion module 62 is configured to convert the active segment of the electromyographic signal to obtain a corresponding frequency spectrum image;
the gesture recognition module 63 is configured to input the frequency spectrum image to the gesture classifier to obtain a gesture action corresponding to the myoelectric signal;
a mechanical control module 64 for generating control instructions for controlling the exoskeleton robot based on the gesture motion and controlling exoskeleton motions of the exoskeleton robot based on the control instructions.
In a possible implementation manner, the signal transformation module 62 transforms an active segment of the electromyographic signal to obtain a corresponding spectrum image, including:
and transforming the active segment of the electromyographic signals by adopting a continuous wavelet transform algorithm or a Fourier transform algorithm to obtain a corresponding frequency spectrum image.
Fig. 7 is a schematic structural diagram of an electronic device according to an embodiment of the present invention. The electronic device 700 shown in fig. 7 includes: at least one processor 701, a memory 702, at least one network interface 704, and other user interfaces 703. The various components in the electronic device 700 are coupled together by a bus system 705. It is understood that the bus system 705 enables communications among these components. In addition to a data bus, the bus system 705 includes a power bus, a control bus, and a status signal bus. However, for clarity of illustration, the various buses are labeled in fig. 7 as the bus system 705.
The user interface 703 may include, among other things, a display, a keyboard, or a pointing device (e.g., a mouse, trackball, touch pad, or touch screen).
It is to be understood that the memory 702 in embodiments of the present invention may be volatile memory or non-volatile memory, or may include both. The non-volatile memory may be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory may be a random access memory (RAM), which functions as an external cache. By way of example, but not limitation, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 702 described herein is intended to comprise, without being limited to, these and any other suitable types of memory.
In some embodiments, memory 702 stores the following elements, executable units or data structures, or a subset thereof, or an expanded set thereof: an operating system 7021 and application programs 7022.
The operating system 7021 includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, for implementing various basic services and processing hardware-based tasks. The application 7022 includes various applications, such as a media player (MediaPlayer), a Browser (Browser), and the like, for implementing various application services. Programs that implement methods in accordance with embodiments of the present invention can be included within application program 7022.
In the embodiment of the present invention, the processor 701 is configured to execute the method steps provided by the method embodiments by calling a program or an instruction stored in the memory 702, specifically, a program or an instruction stored in the application 7022, for example, and includes:
obtaining at least one group of corresponding relations containing gesture actions and electromyographic signals of a user in the gesture action process;
respectively converting the active sections of the electromyographic signals in each group of corresponding relations to obtain a training sample set, wherein each training sample takes a frequency spectrum image obtained by conversion as an input value and takes the gesture as a label value;
and training the convolutional neural network based on the training sample set to obtain a gesture classifier, wherein the gesture classifier takes the frequency spectrum image as an input value and takes the gesture action as an output value.
Alternatively:
collecting an electromyographic signal through an electromyographic signal collecting module;
converting the active segment of the electromyographic signal to obtain a corresponding frequency spectrum image;
inputting the frequency spectrum image to the gesture classifier to obtain gesture actions corresponding to the electromyographic signals;
generating control instructions to control the exoskeleton robot based on the gesture actions, and controlling exoskeleton motions of the exoskeleton robot based on the control instructions.
The method disclosed in the above embodiments of the present invention may be applied to the processor 701, or implemented by the processor 701. The processor 701 may be an integrated circuit chip with signal processing capabilities. In implementation, the steps of the above method may be completed by integrated logic circuits of hardware in the processor 701 or by instructions in the form of software. The processor 701 may be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components. The various methods, steps, and logic blocks disclosed in the embodiments of the present invention may be implemented or performed by it. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like. The steps of the method disclosed in connection with the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in the decoding processor. The software modules may be located in a storage medium well known in the art, such as RAM, flash memory, ROM, PROM, EPROM, or registers. The storage medium is located in the memory 702, and the processor 701 reads the information in the memory 702 and completes the steps of the above method in combination with its hardware.
It is to be understood that the embodiments described herein may be implemented in hardware, software, firmware, middleware, microcode, or any combination thereof. For a hardware implementation, the processing units may be implemented within one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), general purpose processors, controllers, micro-controllers, microprocessors, other electronic units designed to perform the functions described herein, or a combination thereof.
For a software implementation, the techniques described herein may be implemented by means of units performing the functions described herein. The software codes may be stored in a memory and executed by a processor. The memory may be implemented within the processor or external to the processor.
The electronic device provided in this embodiment may be the electronic device shown in fig. 7, and may perform all the steps of the method for constructing the gesture classifier shown in fig. 1 and 3, so as to achieve the technical effect of the method for constructing the gesture classifier shown in fig. 1 and 3, or perform all the steps of the method for controlling the exoskeleton robot shown in fig. 4, so as to achieve the technical effect of the method for controlling the exoskeleton robot shown in fig. 4, specifically please refer to the relevant descriptions of fig. 1, fig. 3 to fig. 4, which are not described herein for brevity.
The embodiment of the invention also provides a storage medium (computer readable storage medium). The storage medium herein stores one or more programs. Among others, the storage medium may include volatile memory, such as random access memory; the memory may also include non-volatile memory, such as read-only memory, flash memory, a hard disk, or a solid state disk; the memory may also comprise a combination of memories of the kind described above.
When the one or more programs in the storage medium are executable by the one or more processors, the method for constructing the gesture classifier or the exoskeleton robot control method executed on the electronic device side is implemented.
The processor is used for executing a construction program of the gesture classifier or an exoskeleton robot control program stored in the memory so as to realize the following steps of the construction method of the gesture classifier executed on the electronic equipment side:
obtaining at least one group of corresponding relations containing gesture actions and electromyographic signals of a user in the gesture action process;
respectively converting the active sections of the electromyographic signals in each group of corresponding relations to obtain a training sample set, wherein each training sample takes a frequency spectrum image obtained by conversion as an input value and takes the gesture as a label value;
and training the convolutional neural network based on the training sample set to obtain a gesture classifier, wherein the gesture classifier takes the frequency spectrum image as an input value and takes the gesture action as an output value.
Or the following steps of the exoskeleton robot control method executed on the electronic device side are realized:
collecting an electromyographic signal through an electromyographic signal collecting module;
converting the active segment of the electromyographic signal to obtain a corresponding frequency spectrum image;
inputting the frequency spectrum image to the gesture classifier to obtain gesture actions corresponding to the electromyographic signals;
generating control instructions to control the exoskeleton robot based on the gesture actions, and controlling exoskeleton motions of the exoskeleton robot based on the control instructions.
Those of skill would further appreciate that the various illustrative components and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative components and steps have been described above generally in terms of their functionality in order to clearly illustrate this interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied in hardware, in a software module executed by a processor, or in a combination of the two. A software module may reside in Random Access Memory (RAM), memory, Read Only Memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The above-mentioned embodiments are intended to illustrate the objects, technical solutions and advantages of the present invention in further detail, and it should be understood that the above-mentioned embodiments are merely exemplary embodiments of the present invention, and are not intended to limit the scope of the present invention, and any modifications, equivalent substitutions, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (12)

1. A construction method of a gesture classifier is characterized by comprising the following steps:
obtaining at least one group of corresponding relations containing gesture actions and electromyographic signals of a user in the gesture action process;
respectively transforming the active segments of the electromyographic signals in each group of corresponding relations to obtain a training sample set, wherein each training sample takes the frequency spectrum image obtained by transformation as an input value and takes the gesture action as a label value;
and training the convolutional neural network based on the training sample set to obtain a gesture classifier, wherein the gesture classifier takes the frequency spectrum image as an input value and takes the gesture action as an output value.
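As a rough illustration (not part of the claim text), the sample-assembly step of claim 1 can be sketched as follows. The `transform` argument stands in for the spectral transform of claims 2 and 3, and all names and the toy data are invented for illustration:

```python
def build_training_set(correspondences, transform):
    """Claim-1 sample assembly: each (gesture label, EMG active segment)
    pair becomes one training sample (spectral image, gesture label)."""
    return [(transform(active_segment), gesture)
            for gesture, active_segment in correspondences]

# Stub transform standing in for the wavelet transform of claims 2-3.
samples = build_training_set(
    correspondences=[("fist", [0.2, 0.8, 0.3]),
                     ("open_hand", [0.1, 0.5, 0.2])],
    transform=lambda segment: [abs(x) for x in segment],
)
```

The resulting list of (input value, label value) pairs is what the convolutional network would be trained on.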
2. The method according to claim 1, wherein the transforming the active segments of the electromyographic signals in each of the sets of correspondences comprises:
and respectively transforming the active segments of the electromyographic signals in each group of corresponding relations by adopting a continuous wavelet transform algorithm to obtain corresponding frequency spectrum images.
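Claim 2 names a continuous wavelet transform but leaves the mother wavelet open. Below is a minimal numpy sketch assuming a complex Morlet wavelet (a common choice for sEMG time-frequency analysis, not specified by the patent); the helper name and parameter values are invented for illustration:

```python
import numpy as np

def morlet_cwt_image(signal, fs, freqs, w0=6.0):
    """Magnitude "spectral image" of a 1-D EMG active segment:
    one row per analysis frequency, one column per sample."""
    n = len(signal)
    t = (np.arange(n) - n // 2) / fs            # time axis centred on zero
    image = np.empty((len(freqs), n))
    for i, f in enumerate(freqs):
        s = w0 / (2.0 * np.pi * f)              # scale giving centre frequency f
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-0.5 * (t / s) ** 2)
        wavelet /= np.sqrt(s) * np.pi ** 0.25   # approximate L2 normalisation
        image[i] = np.abs(np.convolve(signal, wavelet, mode="same"))
    return image

# A pure 100 Hz burst concentrates its energy in the 100 Hz row.
fs = 1000.0
t = np.arange(0, 0.5, 1.0 / fs)
segment = np.sin(2.0 * np.pi * 100.0 * t)
image = morlet_cwt_image(segment, fs, freqs=[50.0, 100.0, 200.0])
```

Each row of the returned array is the response at one analysis frequency, so the 2-D magnitude array can be fed to the convolutional network as an image.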
3. The method according to claim 1, wherein the transforming the active segments of the electromyographic signals in each of the sets of correspondences comprises:
for each group of corresponding relations, grouping the electromyographic signals in the corresponding relations based on the acquisition time to obtain a plurality of signal groups;
and respectively converting the active sections of the electromyographic signals in each signal group to obtain corresponding frequency spectrum images.
4. The method according to claim 3, wherein the grouping electromyographic signals in the correspondence based on acquisition time to obtain a plurality of signal groups comprises:
dividing a preset acquisition time period into a plurality of time windows according to a preset window length; every two adjacent time windows have an overlapping part;
grouping the electromyographic signals in the corresponding relation based on each time window to obtain a plurality of signal groups; the acquisition time of the electromyographic signals in the same signal group falls into the same time window, and different signal groups belong to different time windows.
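The overlapping-window grouping of claim 4 can be sketched as follows; the 200-sample window and 100-sample step (50 % overlap) are illustrative values, since the claim only requires that adjacent windows overlap:

```python
import numpy as np

def overlapping_windows(samples, window_len, step):
    """Claim-4 grouping: fixed-length windows where every two
    adjacent windows share window_len - step samples."""
    if not 0 < step < window_len:
        raise ValueError("need 0 < step < window_len for overlapping windows")
    return [samples[i:i + window_len]
            for i in range(0, len(samples) - window_len + 1, step)]

# 1 s of samples at 1 kHz, 200-sample windows with 50 % overlap.
stream = np.arange(1000)
groups = overlapping_windows(stream, window_len=200, step=100)
```

Each returned group corresponds to one time window, and each sample's acquisition index falls in exactly the window(s) covering it, matching the claim's constraint that a signal group belongs to one time window.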
5. The method according to claim 1, wherein the obtaining at least one group of corresponding relations containing gesture actions and electromyographic signals of the user in the process of making the gesture actions comprises:
and, for each preset gesture action, obtaining at least one group of corresponding relations between the gesture action and the electromyographic signals acquired by an electromyographic signal acquisition module at a preset acquisition frequency while the user makes the gesture action.
6. The method according to claim 1, wherein before said separately transforming the active segments of electromyographic signals in each of said sets of correspondences, the method further comprises:
extracting an envelope signal of the electromyographic signals aiming at each electromyographic signal in each group of corresponding relations;
comparing the extracted envelope signals with a preset threshold value in sequence, and when a first envelope signal larger than the preset threshold value is found among the extracted envelope signals for the first time, determining the first envelope signal as a starting point signal of an active segment;
when determining that a second envelope signal after the first envelope signal is not larger than the preset threshold value, determining the second envelope signal as a termination point signal of an active segment;
and determining a signal section from the starting point signal to the ending point signal in the electromyographic signals as an active section.
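The start/end detection of claim 6 can be sketched with a rectify-and-smooth envelope (one common estimator; the patent does not fix how the envelope is extracted). The rectangular test burst and all parameter values are illustrative only:

```python
import numpy as np

def detect_active_segment(emg, threshold, smooth_len=64):
    """Claim-6 detection: envelope = moving average of the rectified
    signal; the segment runs from the first envelope sample above the
    threshold to the first later sample at or below it."""
    kernel = np.ones(smooth_len) / smooth_len
    envelope = np.convolve(np.abs(emg), kernel, mode="same")
    above = envelope > threshold
    if not above.any():
        return None                          # no activity found
    start = int(np.argmax(above))            # index of the first True
    later = np.flatnonzero(~above[start:])
    end = start + int(later[0]) if later.size else len(emg)
    return start, end

# Toy trace: rest, a rectangular burst, rest again.
emg = np.concatenate([np.zeros(500), np.ones(500), np.zeros(500)])
segment = detect_active_segment(emg, threshold=0.5)
```

Only the samples between the returned start and end indices would then be passed on to the spectral transform.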
7. A method for controlling an exoskeleton robot based on a gesture classifier constructed according to any one of claims 1 to 6, the method comprising:
acquiring an electromyographic signal through an electromyographic signal acquisition module;
converting the active segment of the electromyographic signal to obtain a corresponding frequency spectrum image;
inputting the frequency spectrum image to the gesture classifier to obtain a gesture action corresponding to the electromyographic signal;
generating control instructions to control the exoskeleton robot based on the gesture actions, and controlling exoskeleton motions of the exoskeleton robot based on the control instructions.
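One pass of the claim-7 control loop can be sketched as follows. The gesture-to-command table and every name here are hypothetical (the patent leaves the concrete instruction set to the implementation), and the stub lambdas stand in for the wavelet transform and the trained classifier:

```python
# Hypothetical mapping from recognised gestures to robot commands.
GESTURE_TO_COMMAND = {
    "fist": "CLOSE_GRIP",
    "open_hand": "OPEN_GRIP",
    "wrist_flexion": "FLEX_ELBOW",
}

def control_step(emg_segment, transform, classifier, send_command):
    """One pass of the claim-7 loop: EMG active segment ->
    spectral image -> gesture label -> exoskeleton command."""
    spectral_image = transform(emg_segment)
    gesture = classifier(spectral_image)
    command = GESTURE_TO_COMMAND.get(gesture)
    if command is not None:
        send_command(command)
    return command

# Stub collaborators stand in for the trained CNN and the robot link.
sent = []
cmd = control_step(
    emg_segment=[0.1, 0.4, 0.2],
    transform=lambda seg: seg,          # identity stand-in for the CWT
    classifier=lambda img: "fist",      # stand-in for the trained classifier
    send_command=sent.append,
)
```

Unrecognised gestures simply produce no command, which is a safe default for a wearable device.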
8. The method according to claim 7, wherein transforming the active segment of the electromyographic signal to obtain a corresponding spectral image comprises:
and transforming the active segment of the electromyographic signals by adopting a continuous wavelet transform algorithm or a Fourier transform algorithm to obtain a corresponding frequency spectrum image.
9. An apparatus for constructing a gesture classifier, the apparatus comprising:
the signal acquisition module is used for acquiring the corresponding relation between at least one group of gesture actions and electromyographic signals of a user in the gesture action process;
the signal conversion module is used for respectively converting the active sections of the electromyographic signals in each group of corresponding relations to obtain a training sample set, wherein each training sample takes the frequency spectrum image obtained by conversion as an input value and takes the gesture action as a label value;
and the model training module is used for training the convolutional neural network based on the training sample set to obtain a gesture classifier, and the gesture classifier takes the frequency spectrum image as an input value and takes the gesture action as an output value.
10. An exoskeleton robot control apparatus, the apparatus comprising:
the signal acquisition module is used for acquiring the electromyographic signals through the electromyographic signal acquisition module;
the signal conversion module is used for converting the active segment of the electromyographic signal to obtain a corresponding frequency spectrum image;
the gesture recognition module is used for inputting the frequency spectrum image to the gesture classifier to obtain a gesture action corresponding to the electromyographic signal;
a control module to generate control instructions to control the exoskeleton robot based on the gesture motion and to control exoskeleton motions of the exoskeleton robot based on the control instructions.
11. An electronic device, comprising: a processor and a memory, the processor being configured to execute a construction program of the gesture classifier or an exoskeleton robot control program stored in the memory to implement the construction method of the gesture classifier according to any one of claims 1 to 6 or the exoskeleton robot control method according to any one of claims 7 to 8.
12. A storage medium storing one or more programs executable by one or more processors to implement the method of constructing a gesture classifier of any one of claims 1 to 6 or the method of controlling an exoskeleton robot of any one of claims 7 to 8.
CN202010799377.3A 2020-08-10 2020-08-10 Construction method of gesture classifier, exoskeleton robot control method and device Pending CN112123332A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010799377.3A CN112123332A (en) 2020-08-10 2020-08-10 Construction method of gesture classifier, exoskeleton robot control method and device


Publications (1)

Publication Number Publication Date
CN112123332A (en) 2020-12-25

Family

ID=73851617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010799377.3A Pending CN112123332A (en) 2020-08-10 2020-08-10 Construction method of gesture classifier, exoskeleton robot control method and device

Country Status (1)

Country Link
CN (1) CN112123332A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113688802A (en) * 2021-10-22 2021-11-23 季华实验室 Gesture recognition method, device and equipment based on electromyographic signals and storage medium

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105446484A (en) * 2015-11-19 2016-03-30 浙江大学 Electromyographic signal gesture recognition method based on hidden markov model
US20170220923A1 (en) * 2016-02-02 2017-08-03 Samsung Electronics Co., Ltd. Gesture classification apparatus and method using emg signal
CN110420025A (en) * 2019-09-03 2019-11-08 北京海益同展信息科技有限公司 Surface electromyogram signal processing method, device and wearable device
CN110811633A (en) * 2019-11-06 2020-02-21 中国科学院自动化研究所 Identity recognition method, system and device based on electromyographic signals
CN111103976A (en) * 2019-12-05 2020-05-05 深圳职业技术学院 Gesture recognition method and device and electronic equipment
CN111401166A (en) * 2020-03-06 2020-07-10 中国科学技术大学 Robust gesture recognition method based on electromyographic information decoding
CN111399640A (en) * 2020-03-05 2020-07-10 南开大学 Multi-mode man-machine interaction control method for flexible arm
CN111387978A (en) * 2020-03-02 2020-07-10 北京海益同展信息科技有限公司 Method, device, equipment and medium for detecting action section of surface electromyogram signal


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
韩九强 (Han Jiuqiang) et al., "数字图像处理基于XAVIS组态软件" [Digital Image Processing Based on the XAVIS Configuration Software], Xi'an Jiaotong University Press, 31 May 2018 *



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20201225