CN115050104B - Continuous gesture action recognition method based on multichannel surface electromyographic signals - Google Patents


Info

Publication number
CN115050104B
Authority
CN
China
Prior art keywords
signal, sequence vector, gesture, signals, upper processor
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210977640.2A
Other languages
Chinese (zh)
Other versions
CN115050104A (en)
Inventor
姜汉钧 (Jiang Hanjun)
李孟辉 (Li Menghui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Suzhou Weili Innovation Technology Co ltd
Original Assignee
Suzhou Weili Innovation Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Suzhou Weili Innovation Technology Co ltd filed Critical Suzhou Weili Innovation Technology Co ltd
Priority to CN202210977640.2A priority Critical patent/CN115050104B/en
Publication of CN115050104A publication Critical patent/CN115050104A/en
Application granted granted Critical
Publication of CN115050104B publication Critical patent/CN115050104B/en
Legal status: Active

Classifications

    • G06V40/20 Recognition of movements or behaviour in image or video data, e.g. gesture recognition
    • G06V20/40 Scene-specific elements in video content
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • A61B5/1118 Determining activity level
    • A61B5/1121 Determining geometric values, e.g. centre of rotation or angular range of movement
    • A61B5/1126 Measuring movement of the body using a particular sensing technique
    • A61B5/1128 Measuring movement of the body using image analysis
    • A61B5/389 Electromyography [EMG]
    • A61B5/397 Analysis of electromyograms
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks
    • A61B5/7267 Classification of physiological signals or data involving training the classification device


Abstract

The invention discloses a continuous gesture action recognition method based on multichannel surface electromyogram (sEMG) signals; it belongs to the field of gesture action recognition and relates to human-machine interaction technology. Applying an ICA algorithm resolves the mutual interference caused by signal mixing between channels, which further reduces the training difficulty of the model and the required scale of the training data set. Applying mature monocular vision technology enables automatic labeling of continuously changing gestures, greatly reducing the cost and implementation difficulty of the method compared with traditional approaches. Performing deep-learning training on the user's own sEMG data greatly reduces the demands on the model's generalization ability, so a better training effect is achieved with a smaller amount of training data.

Description

Continuous gesture action recognition method based on multichannel surface electromyographic signals
Technical Field
The invention belongs to the field of gesture motion recognition, relates to a man-machine interaction technology, and particularly relates to a continuous gesture motion recognition method based on multichannel surface electromyographic signals.
Background
Surface electromyography (sEMG) is a common bioelectric signal through which a user's muscle activity can be sensed and interpreted non-invasively. Given its intuitiveness and effectiveness in sensing muscle activity, sEMG has good application prospects in the field of gesture capture. Collection of sEMG is generally performed by a myoelectric arm ring, a wearable device that can obtain a real-time, stable sEMG signal in ordinary daily computing scenarios. The myoelectric arm ring is a parallel data acquisition module formed by an electrode array; it is generally worn on the user's forearm and can acquire data from multiple channels simultaneously. When the user performs a specific gesture, the arm ring recognizes different gesture actions by acquiring and analyzing the surface electromyographic signal of the user's forearm in real time.
The current relatively mature gesture capture technologies fall into three main schemes: magnetic-field-based motion capture gloves, vision-based motion capture cameras, and myoelectric motion capture arm rings. Motion capture gloves are relatively complex in principle and structure, costly to use, and restrict movement. Vision-based gesture capture systems work in a complex way, are strongly affected by lighting and occlusion, and are limited by the size of the usable area. The myoelectric scheme is unaffected by lighting and occlusion, needs no additional equipment, allows relatively free movement, and has the advantages of small data acquisition and transmission volume and low latency.
The input of an existing gesture capture system based on a myoelectric motion capture arm ring is the multichannel sEMG signal of the human forearm, and the system output comes in two types: one intermittently outputs the classification number of a limited set of gestures; the other continuously outputs a spatial geometric model of an arbitrary hand shape. The latter obviously expresses gestures more finely, responds more promptly, and is more powerful overall, so it has higher application value. However, most current sEMG-based gesture capture systems are of the former type, while progress on the latter is slow; the main problems it faces are as follows:
1. limited by hardware and cost bottlenecks, existing myoelectric arm rings have a low channel count and sampling rate, which greatly limits the spatial and temporal density of sEMG data acquisition; this creates an information input bottleneck for the whole system and greatly restricts the function and effect of the subsequent gesture recognition algorithm;
2. even assuming the sEMG sensor can achieve high-spatial-density acquisition, the skin area of the human arm available for recording is limited, so the electrodes are densely arranged; since different gestures are completed by the cooperation of multiple muscles, an electrode is often affected by the electromyographic signals of several surrounding muscles, and how to separate the signals emitted by muscles at different positions and process their signal features separately is a difficult problem;
3. the pattern recognition framework represented by deep learning requires a large amount of high-quality training data in the model training phase. The training data comprises two kinds: multichannel sEMG data collected by the myoelectric arm ring and, as labels, the accurate spatial coordinates of the gesture corresponding to that data; the two must be aligned frame by frame. To obtain accurate spatial gesture coordinates, the continuously changing sEMG data must be labeled automatically and accurately, and such automatic labeling requires additional auxiliary equipment and corresponding software, so the cost and technical difficulty are high;
4. the gesture labels required for deep learning training must contain the main morphological information of the gesture; because the geometry of the human hand is complicated and changeable, the corresponding label dimensionality is high, and training with high-dimensional labels can achieve an acceptable effect only with more complicated models and much larger training data;
5. because of the large individual differences between people's sEMG signals, the scale of the training data set directly affects the generalization ability of the model. Since no training data set can be large enough to cover the sEMG data of everyone, and a model trained on the sEMG data of a few people will almost certainly overfit, training a general model that performs well for most people is nearly impossible.
Generalized gestures include, in addition to fine movements of the hand, some gross movements of the forearm and upper arm, such as waving or raising the hand. Limited by the collection position of the myoelectric arm ring, the sEMG collected by the arm ring can only identify and predict the former; the motion information of the latter cannot be recovered.
Therefore, the invention provides a continuous gesture motion recognition method based on multi-channel surface electromyographic signals.
Disclosure of Invention
The present invention is directed to solving at least one of the problems of the prior art.
To achieve the above object, an embodiment according to a first aspect of the present invention proposes a continuous gesture motion recognition method based on multi-channel surface electromyography signals, including the following steps:
the microprocessor controls the acquisition chip to continuously acquire, at a fixed frequency, sEMG signals and the spatial position and inclination angle information generated by the nine-axis accelerometer; the acquisition chip feeds the sEMG signal and the spatial position and inclination angle information back to the microprocessor;
the microprocessor carries out digital filtering processing on the sEMG signal to obtain a primary processing signal;
the microprocessor transmits the primary processing signal and the spatial position and inclination angle information to an upper processor through a wireless transmission module;
the upper processor uses an independent component analysis algorithm to perform unmixing calculation on the primary processing signal to obtain an input sequence vector;
inputting the input sequence vector into a deep learning model to obtain an output sequence vector;
in the upper processor, a matrix of size s × 3 is generated from the key point coordinates of the output sequence vector, where s represents the number of key gesture points;
the matrix is rotated and corrected using the spatial position and inclination angle information generated by the nine-axis accelerometer to generate G_gest;
in the upper processor, the key point coordinates in G_gest are used to generate a gesture graph or gesture animation, thereby completing gesture recognition of continuous actions.
Further, the process by which the upper processor performs unmixing calculation on the primary processing signal using the independent component analysis algorithm comprises the following steps:
the upper processor sets a time window and a window moving step, and performs statistical feature extraction on each channel of the primary processing signal to form an m × n input sequence vector;
where m is the number of channels processed at a time, and n is the number of features extracted per channel.
Further, the sEMG signal is a 16-channel sEMG signal, and the input sequence vector obtained after primary signal processing and signal unmixing calculation is still a 16-channel signal.
Further, the spatial position and inclination angle information generated by the nine-axis accelerometer contains six elements, <x0, y0, z0, α, β, γ>;
the first three are the translations of the gesture along the X, Y and Z directions in three-dimensional space coordinates;
α is the pitch angle, β is the yaw angle, and γ is the roll angle.
Further, G_gest = M_gest · R_x · R_y · R_z + <x0, y0, z0>;
where R_x, R_y and R_z are the standard rotation matrices about the three coordinate axes:

$$R_x = \begin{pmatrix} 1 & 0 & 0 \\ 0 & \cos\alpha & -\sin\alpha \\ 0 & \sin\alpha & \cos\alpha \end{pmatrix},\quad R_y = \begin{pmatrix} \cos\beta & 0 & \sin\beta \\ 0 & 1 & 0 \\ -\sin\beta & 0 & \cos\beta \end{pmatrix},\quad R_z = \begin{pmatrix} \cos\gamma & -\sin\gamma & 0 \\ \sin\gamma & \cos\gamma & 0 \\ 0 & 0 & 1 \end{pmatrix}$$

wherein α, β and γ correspond to the pitch angle, yaw angle and roll angle respectively.
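As an illustrative sketch (not part of the patent; the function and variable names are assumptions), the rotation-and-translation correction G_gest = M_gest · R_x · R_y · R_z + <x0, y0, z0> can be written as:

```python
import numpy as np

def rot_x(a):
    # Rotation matrix about the X axis (pitch angle alpha)
    c, s = np.cos(a), np.sin(a)
    return np.array([[1, 0, 0], [0, c, -s], [0, s, c]])

def rot_y(b):
    # Rotation matrix about the Y axis (yaw angle beta)
    c, s = np.cos(b), np.sin(b)
    return np.array([[c, 0, s], [0, 1, 0], [-s, 0, c]])

def rot_z(g):
    # Rotation matrix about the Z axis (roll angle gamma)
    c, s = np.cos(g), np.sin(g)
    return np.array([[c, -s, 0], [s, c, 0], [0, 0, 1]])

def correct_gesture(M_gest, pose):
    """Rotate the s x 3 key-point matrix by the armband attitude and
    translate it by <x0, y0, z0>, giving G_gest."""
    x0, y0, z0, a, b, g = pose
    return M_gest @ rot_x(a) @ rot_y(b) @ rot_z(g) + np.array([x0, y0, z0])
```

With a zero attitude the correction reduces to a pure translation of all 21 key points.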
Further, the training process of the deep learning model is as follows:
acquiring an input sequence vector and an output sequence vector by combining signal acquisition hardware with an upper processor;
taking the input sequence vector and the output sequence vector as the input and the output of a deep learning model, training the deep learning model, and establishing a mapping relation between the input sequence vector and the output sequence vector;
the deep learning model can be a Recurrent Neural Network (RNN) or a Convolutional Neural Network (CNN).
Furthermore, the signal acquisition hardware comprises a myoelectric arm ring and a video shooting module;
the myoelectric arm ring comprises a microprocessor, an acquisition chip, a wireless transmission module, a nine-axis accelerometer and a plurality of electrode modules;
the microprocessor is an STM32H743 microprocessor, and the acquisition chip is a WLS128 chip; the wireless transmission module is a Bluetooth module;
the number of the electrode modules is 16; each electrode module comprises three metal electrodes which are arranged at equal intervals, and the connection mode of the electrode modules is a double-differential mode.
Further, the video shooting module is used for shooting gestures of the volunteers in real time to obtain video images; sending the obtained video image to an upper processor;
in an upper processor, performing frame-by-frame calculation on the spatial three-dimensional coordinates of the gesture key points of the acquired video image by using a monocular vision algorithm, and obtaining an s x 3 matrix;
where s represents the number of key gesture points.
Compared with the prior art, the invention has the beneficial effects that:
In terms of hardware, surface electromyographic signal acquisition and real-time digital filtering are performed by a high-precision bioelectric acquisition chip (WLS128) and a high-performance STM32H743 microprocessor, achieving sEMG acquisition at a 500 Hz sampling rate (freely configurable from 125 Hz to 2000 Hz), 16 channels (freely configurable from 8 to 64 channels) and 24-bit width, with data transmitted in real time by a low-power Bluetooth module. The invention acquires the spatial motion information of the arm ring in real time through the nine-axis gyroscope, thereby capturing the motion pattern of the whole forearm, realizing multimodal fusion of sEMG information and spatial position information, and accounting for both fine hand movements and gross arm movements.
Applying the ICA algorithm effectively resolves the mutual interference caused by signal mixing between channels, which further reduces the training difficulty of the model and the required scale of the training data set.
Applying mature monocular vision technology enables automatic labeling of continuously changing gestures, greatly reducing cost and implementation difficulty compared with traditional methods. Thanks to this convenience, sEMG labeling can be performed from the user's own gesture video data with nothing more than a smartphone; performing deep-learning training on the user's own sEMG data greatly reduces the demands on the model's generalization ability, so a better training effect is achieved with a smaller amount of training data.
The gesture labels undergo a beneficial dimensionality reduction: the gesture form is represented with fewer dimensions without losing detail, which further reduces the training difficulty of the model and the required scale of the training data set.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments described in the present invention, and for those skilled in the art, other drawings can be obtained according to the drawings without creative efforts.
FIG. 1 is a flow chart of the present invention.
Detailed Description
The technical solutions of the present invention will be described below clearly and completely in conjunction with the embodiments, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to provide a thorough understanding of embodiments of the invention. One skilled in the relevant art will recognize, however, that the invention may be practiced without one or more of the specific details, or with other methods, components, devices, steps, and so forth. In other instances, well-known methods, devices, implementations or operations have not been shown or described in detail to avoid obscuring aspects of the invention.
In this method, the spatial motion information of the arm ring is acquired in real time through a nine-axis gyroscope, thereby capturing the motion pattern of the whole forearm, realizing multimodal fusion of sEMG information and spatial position information, and accounting for both fine hand movements and gross arm movements. Applying the ICA algorithm effectively resolves the mutual interference caused by signal mixing between channels, which further reduces the training difficulty of the model and the required scale of the training data set. Applying mature monocular vision technology enables automatic labeling of continuously changing gestures, greatly reducing cost and implementation difficulty compared with traditional approaches. Thanks to this convenience, sEMG labeling can be performed from the user's own gesture video data with nothing more than a smartphone; performing deep-learning training on the user's own sEMG data greatly reduces the demands on the model's generalization ability, achieving a better training effect with a smaller amount of training data. The gesture labels undergo a beneficial dimensionality reduction: the gesture form is represented with fewer dimensions without losing detail, which further reduces the training difficulty of the model and the required scale of the training data set.
Fig. 1 shows a flow chart of a method for continuous gesture motion recognition based on multi-channel surface electromyography signals. As shown in fig. 1, the continuous gesture motion recognition method based on multi-channel surface electromyogram signals includes two stages, which are divided into the following steps:
the first stage is an off-line model training stage; the off-line model training stage comprises the following steps:
step S101: acquiring an input sequence vector and an output sequence vector by combining signal acquisition hardware with an upper processor;
specifically, the input sequence vector is obtained in the following manner:
the microprocessor controls the acquisition chip to continuously acquire sEMG signals at a fixed frequency; the acquisition chip feeds the obtained sEMG signal back to the microprocessor;
the microprocessor carries out digital filtering processing on the sEMG signal to obtain a primary processing signal;
the microprocessor transmits the primary processing signal to the upper processor through the wireless transmission module;
the upper processor uses an independent component analysis algorithm to perform unmixing calculation on the primary processing signal to obtain an input sequence vector;
the process by which the upper processor performs the unmixing calculation on the primary processing signal comprises the following steps:
the upper processor sets a time window and a window moving step, and performs statistical feature extraction on each channel of the primary processing signal to form an m × n input sequence vector;
where m is the number of channels processed at a time, and n is the number of features extracted per channel;
in a specific embodiment, the signal acquisition hardware comprises a myoelectric arm ring, wherein the myoelectric arm ring comprises but is not limited to a microprocessor, an acquisition chip, a wireless transmission module, a nine-axis accelerometer and a plurality of electrode modules; the microprocessor is an STM32H743 microprocessor, and the acquisition chip is a WLS128 chip; the wireless transmission module is a Bluetooth module; the number of the electrode modules is 16; each electrode module comprises three metal electrodes (copper, nickel and other metals) which are arranged at equal intervals, and the connection mode of the electrode modules is a double differential mode;
it should be noted that each WLS128 chip can acquire 8 channels of independent 24-bit sEMG signals at a sampling rate of 125 Hz to 2000 Hz, and two WLS128 chips connected in parallel can synchronously acquire 16-channel sEMG signals in real time;
surface electromyographic signal acquisition and real-time digital filtering are performed by the high-precision bioelectric acquisition chip WLS128 and the high-performance STM32H743 microprocessor, achieving sEMG acquisition at a 500 Hz sampling rate (freely configurable from 125 Hz to 2000 Hz), 16 channels (freely configurable from 8 to 64 channels) and 24-bit width;
in operation, the STM32H743 microprocessor controls the WLS128 chips to continuously acquire 16-channel sEMG signals through the myoelectric arm ring at a 500 Hz sampling rate; the raw 16-channel sEMG signal is labeled M_orig;
the STM32H743 microprocessor applies a streaming IIR filter to M_orig for digital filtering, removing baseline drift, high-frequency noise and power-frequency interference;
further, the digital filtering is implemented by passing the signal of each channel, in sequence, through a 10 to 200 Hz IIR band-pass filter and then IIR notch filters at 50 Hz, 100 Hz and 150 Hz; the filtered signal is labeled M_filter;
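This filter chain can be sketched with SciPy (the filter order and notch Q are assumptions; the patent specifies only the passband and notch frequencies):

```python
import numpy as np
from scipy import signal

FS = 500  # sampling rate in Hz, per the embodiment

# 10-200 Hz band-pass; a 4th-order Butterworth is a common choice
# (the patent does not specify the filter order)
sos_bp = signal.butter(4, [10, 200], btype="bandpass", fs=FS, output="sos")

# 50/100/150 Hz notches for power-frequency interference and its harmonics
# (Q = 30 is an assumed notch quality factor)
notches = [signal.iirnotch(f0, Q=30.0, fs=FS) for f0 in (50, 100, 150)]

def filter_channel(x):
    """Band-pass then notch-filter one channel of raw sEMG (M_orig -> M_filter)."""
    y = signal.sosfilt(sos_bp, x)
    for b, a in notches:
        y = signal.lfilter(b, a, y)
    return y
```

Applied to a constant (baseline) input, the band-pass stage drives the output to zero after the initial transient; a 50 Hz sine is likewise suppressed by the first notch.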
the STM32H743 microprocessor controls the Bluetooth module to send M_filter to the upper processor;
in the upper processor, an independent component analysis (ICA) algorithm is used to perform unmixing calculation on M_filter; the unmixed signal is labeled M_indep;
it should be noted that M_indep after unmixing is still a 16-channel signal;
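The unmixing step can be sketched with scikit-learn's FastICA (the specific ICA implementation is an assumption; the patent names only the algorithm family):

```python
import numpy as np
from sklearn.decomposition import FastICA

def unmix(M_filter):
    """Unmix filtered multi-channel sEMG (shape: samples x channels) into
    the same number of statistically independent components, M_indep."""
    ica = FastICA(n_components=M_filter.shape[1], random_state=0)
    return ica.fit_transform(M_filter)
```

Because n_components equals the channel count, a 16-channel M_filter yields a 16-channel M_indep, matching the note above.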
in the upper processor, with 100 ms as the time window and 20 ms as the window moving step, statistical features are extracted from M_indep; the extracted features are shown in Table 1, 14 in total, forming a vector of size 16 × 14 per frame, which is labeled M_feature;
TABLE 1
Zero crossings: number of times the data points change sign
Slope sign changes: number of sign changes of the differential sequence
Waveform length: sum of the absolute values of the differential sequence
Willison amplitude: number of times the absolute value of the differential sequence exceeds the standard deviation of the absolute-value sequence
Mean absolute value: mean of the absolute-value sequence
Mean square: mean of the squared sequence
Root mean square: square root of the mean square
Mean cube root: cube root of the mean of the cubed sequence
Log detector: natural exponential of the mean of the logarithmic sequence
Difference root mean square: standard deviation of the absolute values of the differential sequence
Maximum fractal length: natural logarithm of the difference root mean square
Myopulse percentage: number of times the absolute value exceeds the standard deviation
Mean absolute value slope: difference between the mean absolute values of the first half and second half of the data
Weighted mean absolute value: weighted average of the absolute-value sequence
Since the window moving step is 20 ms, the sampling rate of M_feature is 50 Hz.
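To make the windowing concrete, here is a sketch computing five of Table 1's fourteen features over 100 ms windows with a 20 ms step (at the 500 Hz sampling rate this is 50 samples with a 10-sample step; the helper names are illustrative, not from the patent):

```python
import numpy as np

WIN, STEP = 50, 10  # 100 ms window and 20 ms step at 500 Hz

def window_features(x):
    """Five of Table 1's fourteen statistical features for one channel window."""
    d = np.diff(x)
    sign_x = np.signbit(x).astype(int)
    sign_d = np.signbit(d).astype(int)
    return np.array([
        np.sum(np.abs(np.diff(sign_x))),  # zero crossings
        np.sum(np.abs(np.diff(sign_d))),  # slope sign changes
        np.sum(np.abs(d)),                # waveform length
        np.mean(np.abs(x)),               # mean absolute value
        np.sqrt(np.mean(x ** 2)),         # root mean square
    ])

def extract(M_indep):
    """Slide the window over each channel of M_indep to build M_feature frames."""
    n_samp, n_ch = M_indep.shape
    frames = []
    for start in range(0, n_samp - WIN + 1, STEP):
        w = M_indep[start:start + WIN]
        frames.append(np.stack([window_features(w[:, c]) for c in range(n_ch)]))
    return np.array(frames)  # shape: (n_frames, n_channels, n_features)
```

With all 14 features of Table 1 and 16 channels, each frame would be the 16 × 14 vector M_feature described above.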
Specifically, the output sequence vector is obtained in the following manner:
shooting the gestures of the volunteers in real time by using a video shooting module to obtain video images; sending the obtained video image to an upper processor;
wherein the video frame rate of the video image is 50Hz; the video shooting module can be a monocular camera or a smart phone with a camera;
in the upper processor, a monocular vision algorithm computes, frame by frame, the spatial three-dimensional coordinates of the gesture key points in the acquired video images, yielding an s × 3 matrix;
where s represents the number of gesture key points;
specifically, in the upper processor, the gesture video collected by the camera is processed frame by frame with a mature monocular vision algorithm; each frame of the image yields the spatial three-dimensional coordinates of 21 gesture key points, giving a matrix M_gest of size 21 × 3.
The monocular vision algorithm may be the Google open-source project MediaPipe.
Converting the matrix into an output sequence vector;
In a specific embodiment, the joint lengths of the hand are preset to a series of fixed values that satisfy the geometric proportions of the joints of the human hand, so the features of a gesture are represented only by the angles between joints. These angle values are easily computed from the geometric relations, thereby compressing the dimensionality of the main gesture features.
With the above conversion, the 21 × 3 matrix M_gest is converted to a vector V_gest of length 27. It should be noted that since the frame rate of the captured video is 50 Hz, the sampling rate of V_gest is also 50 Hz.
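A minimal sketch of this keypoint-to-angle compression, assuming each angle is computed at a joint from a (parent, joint, child) index triple. The patent does not spell out which 27 angles are used, so `triplets` and both function names are illustrative:

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at keypoint b (radians) formed by segments b->a and b->c."""
    v1, v2 = a - b, c - b
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.arccos(np.clip(cosang, -1.0, 1.0))

def keypoints_to_angles(m_gest, triplets):
    """Compress a 21x3 keypoint matrix to a vector of inter-joint angles.
    `triplets` lists (parent, joint, child) index triples; the exact 27
    angles used by the patent are not specified, so this is illustrative."""
    return np.array([joint_angle(m_gest[i], m_gest[j], m_gest[k])
                     for i, j, k in triplets])
```

With 27 such triples this produces the 27-element V_gest while discarding absolute joint lengths, as described above.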
step S102: taking the input sequence vector and the output sequence vector as the input and the output of a deep learning model, training the deep learning model, and establishing a mapping relation between the input sequence vector and the output sequence vector;
the deep learning model can be a Recurrent Neural Network (RNN) or a Convolutional Neural Network (CNN);
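To illustrate the shapes involved in this mapping, here is a forward-pass-only Elman RNN in NumPy that maps a sequence of 16 × 14 feature frames to 27-dimensional gesture vectors. The real system would train an RNN or CNN with backpropagation; the class name, hidden size, and weight initialization here are assumptions:

```python
import numpy as np

class TinyRNN:
    """Minimal Elman RNN (forward pass only) sketching the mapping from
    16x14 feature frames to 27-dimensional gesture vectors. Shapes match
    the patent (M_feature in, V_gest out); everything else is illustrative."""
    def __init__(self, n_in=16 * 14, n_hidden=64, n_out=27, seed=0):
        rng = np.random.default_rng(seed)
        self.Wxh = rng.standard_normal((n_in, n_hidden)) * 0.01
        self.Whh = rng.standard_normal((n_hidden, n_hidden)) * 0.01
        self.Who = rng.standard_normal((n_hidden, n_out)) * 0.01

    def forward(self, frames):
        h = np.zeros(self.Whh.shape[0])
        outs = []
        for frame in frames:                 # one 16x14 frame per 20 ms step
            x = frame.reshape(-1)            # flatten to 224-dim input
            h = np.tanh(x @ self.Wxh + h @ self.Whh)
            outs.append(h @ self.Who)        # 27-dim V_gest estimate
        return np.asarray(outs)
```

Both input and output streams run at 50 Hz, so the model emits one 27-dimensional vector per 20 ms frame.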
Steps S101 and S102 constitute the offline model training stage, whose main purpose is to obtain the mapping relationship between the input sequence vector and the output sequence vector;
the second stage is the online continuous gesture recognition stage, which comprises the following steps:
Step S103: the microprocessor controls the acquisition chip to continuously acquire, at a fixed frequency, the sEMG signals and the spatial position and inclination angle information generated by the nine-axis accelerometer; the acquisition chip feeds the sEMG signals and the spatial position and inclination angle information back to the microprocessor;
the microprocessor carries out digital filtering processing on the sEMG signal to obtain a primary processing signal;
the microprocessor transmits the primary processing signal and the spatial position and inclination angle information to an upper processor through a wireless transmission module;
in one embodiment, the same signal acquisition hardware, namely the myoelectric arm ring, is used in the on-line continuous gesture recognition stage;
The STM32H743 microprocessor controls a WLS128 chip; the myoelectric arm ring continuously acquires the 16-channel sEMG signal and the spatial position and tilt angle information generated by the nine-axis accelerometer at a 500 Hz sampling rate. The 16-channel sEMG signal is labeled M_orig, and the spatial position and tilt angle information generated by the nine-axis accelerometer is labeled L_hand.
The STM32H743 microprocessor applies streaming IIR filters to M_orig to perform digital filtering, removing baseline drift, high-frequency noise, and power-frequency interference.
Further, the digital filtering passes each channel's signal, in sequence, through a 10–200 Hz IIR band-pass filter and 50 Hz, 100 Hz, and 150 Hz IIR notch filters to obtain a filtered signal, labeled M_filter.
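The filter chain can be sketched with SciPy as follows. This is a sketch, not the embedded firmware: the STM32 would run an equivalent streaming IIR with persistent per-filter state, and the filter order and notch Q used here are assumed values:

```python
import numpy as np
from scipy import signal

FS = 500  # sEMG sampling rate, Hz

# 4th-order Butterworth band-pass, 10-200 Hz, as second-order sections
sos_bp = signal.butter(4, [10, 200], btype="bandpass", fs=FS, output="sos")
# 50/100/150 Hz notch filters for power-frequency interference and harmonics
notches = [signal.iirnotch(f0, Q=30, fs=FS) for f0 in (50, 100, 150)]

def filter_channel(x):
    """Apply the band-pass then each notch to one channel of M_orig.
    A streaming version would carry filter state between chunks
    (e.g. via sosfilt_zi / lfilter_zi); this sketch filters a whole buffer."""
    y = signal.sosfilt(sos_bp, x)
    for b, a in notches:
        y = signal.lfilter(b, a, y)
    return y
```

Applying `filter_channel` to each of the 16 channels yields M_filter.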
The STM32H743 microprocessor controls the Bluetooth module to send M_filter and L_hand to the upper processor.
The upper processor performs unmixing calculation on the primary processing signal using an independent component analysis algorithm to obtain an input sequence vector;
the process by which the upper processor performs this unmixing calculation comprises:
the upper processor sets a time window and a window moving step, and performs statistical feature extraction on each channel of the primary processing signal to form an m × n input sequence vector;
consistent with the offline model training stage, m is the number of channels of signals processed at one time, and n is the number of features extracted per channel;
In one embodiment, in the upper processor, M_filter is unmixed using an independent component analysis (ICA) algorithm, and the unmixed signal is labeled M_indep.
It should be noted that the unmixed M_indep is still a 16-channel signal.
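A sketch of the unmixing step using scikit-learn's FastICA, one common ICA implementation; the patent does not name a specific ICA algorithm, and the function name is an assumption:

```python
import numpy as np
from sklearn.decomposition import FastICA

def unmix(m_filter, seed=0):
    """Unmix multi-channel filtered sEMG with FastICA.
    Input/output shapes: (n_channels, n_samples) -> (n_channels, n_samples),
    so the channel count (16 in the patent) is preserved."""
    ica = FastICA(n_components=m_filter.shape[0], random_state=seed)
    # FastICA expects (n_samples, n_features), hence the transposes
    m_indep = ica.fit_transform(m_filter.T).T
    return m_indep
```

Because `n_components` equals the input channel count, the unmixed M_indep keeps 16 channels, as noted above.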
In the upper processor, with a 100 ms time window and a 20 ms window moving step, statistical features are extracted from each channel of M_indep, forming a vector of size 16 × 14 per frame, labeled M_feature.
Step S104: inputting the input sequence vector into a deep learning model to obtain an output sequence vector;
It should be noted that the output sequence vector is the 27-dimensional vector V_gest.
Step S105: in an upper processor, generating a matrix of s x 3 by the key point coordinates of the output sequence vector; wherein s represents the number of key gesture points; wherein the generated matrix is labeled M gest
The spatial position and tilt angle information L_hand generated by the nine-axis accelerometer is used to rotate and correct M_gest, generating G_gest.
Wherein L is hand Is a vector comprising six elements respectively
<x0,y0,z0,α,β,γ>
The first three are translation amounts of the gestures in the XYZ directions in the three-dimensional space coordinates;
α is the rotation about the X axis, also called the pitch angle;
β is the rotation about the Y axis, also called the yaw angle;
γ is the rotation about the Z axis, also called the roll angle.
Then G_gest = M_gest * R_x * R_y * R_z + <x0, y0, z0>,
Wherein:
R_x = [[1, 0, 0], [0, cos α, −sin α], [0, sin α, cos α]]
R_y = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cos β]]
R_z = [[cos γ, −sin γ, 0], [sin γ, cos γ, 0], [0, 0, 1]]
wherein α, β, and γ correspond to the pitch, yaw, and roll angles, respectively;
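The rotation-and-translation correction G_gest = M_gest * R_x * R_y * R_z + <x0, y0, z0> can be sketched in NumPy as follows (the function name is an assumption; the rotation matrices are the standard single-axis forms):

```python
import numpy as np

def correct_pose(m_gest, l_hand):
    """Rotate and translate an s x 3 keypoint matrix using the
    nine-axis pose <x0, y0, z0, alpha, beta, gamma>:
    G_gest = M_gest * R_x * R_y * R_z + <x0, y0, z0>."""
    x0, y0, z0, a, b, g = l_hand
    ca, sa = np.cos(a), np.sin(a)
    cb, sb = np.cos(b), np.sin(b)
    cg, sg = np.cos(g), np.sin(g)
    Rx = np.array([[1, 0, 0], [0, ca, -sa], [0, sa, ca]])     # pitch (X)
    Ry = np.array([[cb, 0, sb], [0, 1, 0], [-sb, 0, cb]])     # yaw (Y)
    Rz = np.array([[cg, -sg, 0], [sg, cg, 0], [0, 0, 1]])     # roll (Z)
    return m_gest @ Rx @ Ry @ Rz + np.array([x0, y0, z0])
```

Note the row-vector convention implied by the patent's formula: each keypoint row of M_gest is multiplied by the rotation matrices before the translation is added.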
In the upper processor, the key point coordinates in G_gest are used to generate a gesture graphic or gesture animation, thereby completing the gesture recognition of continuous actions.
Although the present invention has been described in detail with reference to the preferred embodiments, it will be understood by those skilled in the art that various changes may be made and equivalents may be substituted for elements thereof without departing from the spirit and scope of the present invention.

Claims (8)

1. A continuous gesture action recognition method based on multichannel surface electromyogram signals is characterized by comprising the following steps:
the microprocessor controls the acquisition chip to continuously acquire sEMG signals and spatial position and inclination angle information generated by the nine-axis accelerometer by using fixed frequency; the acquisition chip feeds back the sEMG signal and the spatial position and inclination angle information to the microprocessor;
the microprocessor carries out digital filtering processing on the sEMG signal to obtain a primary processing signal;
the microprocessor transmits the primary processing signal and the spatial position and inclination angle information to an upper processor through a wireless transmission module;
the upper processor performs unmixing calculation on the primary processing signal using an independent component analysis algorithm to obtain an input sequence vector;
inputting the input sequence vector into a deep learning model to obtain an output sequence vector;
in an upper processor, generating a matrix of s x 3 by the key point coordinates of the output sequence vector; wherein s represents the number of key gesture points;
the matrix is rotated and corrected using the spatial position and tilt angle information generated by the nine-axis accelerometer to generate G_gest;
in the upper processor, the key point coordinates in G_gest are used to generate a gesture graphic or gesture animation, thereby completing gesture recognition of continuous actions.
2. The method for continuous gesture motion recognition based on multi-channel surface electromyogram signals according to claim 1, wherein the process by which the upper processor performs unmixing calculation on the primary processing signal using an independent component analysis algorithm comprises:
the upper processor sets a time window and a window moving step length, and performs statistical feature extraction on each channel data of a primary processing signal to form an m x n input sequence vector;
where m is the number of channels processing signals at a time, and n is the number of features extracted by a single channel.
3. The method for continuous gesture motion recognition based on multi-channel surface electromyography signals according to claim 1, wherein the sEMG signals are 16-channel sEMG signals, and the input sequence vector obtained after unmixing calculation of the primary processing signal is still a 16-channel signal.
4. The method for continuous gesture motion recognition based on multi-channel surface electromyography signals according to claim 1, wherein the spatial position and tilt angle information generated by the nine-axis accelerometer comprises six elements, respectively <x0, y0, z0, α, β, γ>;
the first three are the translations of the gesture along the X, Y, and Z directions in the three-dimensional space coordinates; α is the pitch angle; β is the yaw angle; γ is the roll angle.
5. The continuous gesture motion recognition method based on multi-channel surface electromyography signals of claim 4, wherein G_gest = M_gest * R_x * R_y * R_z + <x0, y0, z0>; and
R_x = [[1, 0, 0], [0, cos α, −sin α], [0, sin α, cos α]]
R_y = [[cos β, 0, sin β], [0, 1, 0], [−sin β, 0, cb β]]
R_z = [[cos γ, −sin γ, 0], [sin γ, cos γ, 0], [0, 0, 1]]
6. The continuous gesture motion recognition method based on the multi-channel surface electromyogram signal of claim 1, wherein the deep learning model is trained by the following steps:
acquiring an input sequence vector and an output sequence vector by combining signal acquisition hardware with an upper processor;
taking the input sequence vector and the output sequence vector as the input and the output of a deep learning model, training the deep learning model, and establishing a mapping relation between the input sequence vector and the output sequence vector;
the deep learning model is a Recurrent Neural Network (RNN) or a Convolutional Neural Network (CNN).
7. The method for recognizing the continuous gesture action based on the multichannel surface electromyogram signal according to claim 6, wherein the signal acquisition hardware comprises an electromyogram arm ring and a video shooting module;
the myoelectric arm ring comprises a microprocessor, an acquisition chip, a wireless transmission module, a nine-axis accelerometer and a plurality of electrode modules;
the microprocessor is an STM32H743 microprocessor, and the acquisition chip is a WLS128 chip; the wireless transmission module is a Bluetooth module;
the number of the electrode modules is 16; each electrode module comprises three metal electrodes which are arranged at equal intervals, and the connection mode of the electrode modules is a double-difference mode.
8. The method for recognizing the continuous gesture action based on the multichannel surface electromyogram signal according to claim 7, wherein the video shooting module is used for shooting the gesture of the volunteer in real time to obtain a video image; sending the obtained video image to an upper processor;
in an upper processor, performing frame-by-frame calculation on the spatial three-dimensional coordinates of the gesture key points of the acquired video image by using a monocular vision algorithm, and obtaining an s x 3 matrix;
where s represents the number of key gesture points.
CN202210977640.2A 2022-08-16 2022-08-16 Continuous gesture action recognition method based on multichannel surface electromyographic signals Active CN115050104B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210977640.2A CN115050104B (en) 2022-08-16 2022-08-16 Continuous gesture action recognition method based on multichannel surface electromyographic signals


Publications (2)

Publication Number Publication Date
CN115050104A CN115050104A (en) 2022-09-13
CN115050104B true CN115050104B (en) 2022-11-25

Family

ID=83166679

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210977640.2A Active CN115050104B (en) 2022-08-16 2022-08-16 Continuous gesture action recognition method based on multichannel surface electromyographic signals

Country Status (1)

Country Link
CN (1) CN115050104B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116524572B (en) * 2023-05-16 2024-01-26 北京工业大学 Face accurate real-time positioning method based on self-adaptive Hope-Net
CN116849684B (en) * 2023-08-29 2023-11-03 苏州唯理创新科技有限公司 Signal source space positioning method of multichannel sEMG based on independent component analysis

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101482773A (en) * 2009-01-16 2009-07-15 中国科学技术大学 Multi-channel wireless surface myoelectric signal collection apparatus and system
CN107943283B (en) * 2017-11-08 2021-02-02 浙江工业大学 Mechanical arm pose control system based on gesture recognition
CN114647314A (en) * 2022-03-21 2022-06-21 天津大学 Wearable limb movement intelligent sensing system based on myoelectricity

Also Published As

Publication number Publication date
CN115050104A (en) 2022-09-13


Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CB03 Change of inventor or designer information

Inventor after: Li Menghui

Inventor before: Jiang Hanjun

Inventor before: Li Menghui

CB03 Change of inventor or designer information