CN115291730A - Wearable bioelectric equipment and bioelectric action identification and self-calibration method


Info

Publication number: CN115291730A
Application number: CN202210963154.5A
Authority: CN (China)
Other languages: Chinese (zh)
Other versions: CN115291730B (granted publication)
Inventors: 冯立辉, 陈威, 卢继华
Assignee (original and current): Beijing Institute of Technology (BIT)
Priority and filing date: 2022-08-11
Legal status: Granted; Active

Classifications

    • G06F 3/011 — Input arrangements for interaction between user and computer; arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • G06T 3/4007 — Geometric image transformation in the plane of the image; scaling; interpolation-based scaling, e.g. bilinear interpolation

Abstract

The invention belongs to the technical field of bioelectrical information processing and action recognition, and relates to a wearable bioelectric device and a bioelectric action recognition and self-calibration method. The device comprises an acquisition module, a signal processing module, an STL module, a convolutional neural network and an FTL module. The STL module performs a spatial transformation on the polar coordinate radar image according to an affine inverse transformation, with the body rotation angle unknown, and carries out rotation, cropping and scaling to obtain a corrected feature map and a spatial transformation rotation angle. The corrected feature map output by the STL module enters the convolutional neural network, which outputs action probability values. The FTL module determines, from the action probability values, the rotation angle of the corrected feature map within the range between adjacent electrodes, i.e. the fine tuning rotation angle; the fine tuning rotation angle is added to the spatial transformation rotation angle to obtain the calibration angle. The action recognition result is given by the maximum of the summed probability values of the same action. The device and method can resist individual differences, compensate for device wearing deviation and improve the robustness of action recognition.

Description

Wearable bioelectric device and bioelectric action recognition and self-calibration method
Technical Field
The invention belongs to the technical field of bioelectricity information processing and action recognition, and particularly relates to wearable bioelectricity equipment and an action recognition and self-calibration method based on bioelectricity.
Background
With the rise of metaverse, virtual reality and digital human applications, more and more consumers enter the virtual world through VR/AR headsets to play games, watch movies, hold teleconferences and even socialize virtually. Many companies worldwide, such as Microsoft, Meta, ByteDance and Tencent, have increased capital investment in this area of technology, which may become the next generation of the Internet, to drive the development of virtual-reality-based technologies. When interacting with virtual objects, the practicality of the interaction is of great importance.
Currently, wearable interaction mainly includes bare-hand interaction and motion capture based on deployed IMU sensors. Motion capture includes action recognition based on fiber-optic sensors, piezoelectric sensors and various bioelectrical signals. Bioelectricity includes surface electromyography (sEMG), electrocardiography, electroencephalography and the like. Each technical route has advantages and disadvantages and plays to its strengths in different scenarios. Vision-based bare-hand interaction often limits the operating area to the field of view (FOV); it suffers from inaccurate prediction and low precision, and the high power consumption of the equipment is one of the factors limiting its popularization. Glove-type motion capture is inconvenient for users to wear for long periods and across multiple scenarios. sEMG devices, by contrast, can be worn in the form of a smart watch, allow outdoor control, consume little power and support long battery life.
Among existing bioelectric action recognition products and studies, a multi-view CNN framework based on bracelets with sparse surface-electromyography electrode arrays has been adopted; the framework aggregates successively collected surface electromyograms and selects the most reliable features to improve classification accuracy. Replacing sparse surface myoelectric electrodes with high-density electrodes is another strategy for improving recognition accuracy. Considering information of different dimensions and a primitive-decomposition-based method, electromyographic signals have been converted into a series of stationary signals and a gesture recognition model built with an RCNN for recognition. To improve training efficiency, transfer learning has been combined with joint learning, for example joint recognition with CNN and LSTM. Myoelectric signals have also been characterized as two-dimensional bioelectric signal images, the image data serialized, fused features of accelerometer and surface-electromyography signals extracted, and a GRNN model used for training and recognition. Spiking neural networks (SNN) have been adopted for recognition, saving power while achieving high classification accuracy, and adversarial neural networks have been employed to recognize myoelectric gestures. On the feature-set side, three types of spatial feature sets have been proposed, with histogram of oriented gradients (HOG) and support vector machines (SVM) used to achieve high-accuracy classification and recognition.
However, in practical applications, gesture recognition alone is not sufficient. For example, in an interactive virtual-reality scene, the user's pose must be known correctly; otherwise a prop may be held in an inconsistent direction, causing the user to move in the opposite direction or perform the opposite action in the virtual scene. Calibration is therefore of vital importance for electrode-based wearable devices and has attracted considerable attention from researchers. Position estimation and adaptive correction based on the active polar angle (APA) have been proposed: the mean absolute value (MAV) of the bioelectrical data from 8 electrodes is calculated, the armband rotation is calibrated in a polar coordinate system based on the APA, and 8 gestures are then classified by a pre-trained support vector machine. Classification based on the muscle core region and a CNN-LSTM hybrid model has also been proposed by researchers. An electromyogram enhancement strategy has also been presented in which electrode displacement is realized through median filtering and interpolation; after data enhancement, gesture recognition is performed with a dilated convolutional neural network (DCNN), achieving real-time gesture recognition under electrode motion based on conductivity profiles and anatomical principles. There is also a method of recognizing forearm surface electromyography and correcting rotation via muscle activation sources, which selects an optimal classifier to recognize position and gesture after the user performs an anchor gesture, based on a position-recognition PI.
However, existing wearable bioelectric devices cannot be put on and used immediately; they suffer from low recognition accuracy caused by inaccurate wearing and from inaccurate prediction caused by individual differences. In addition, although more acquisition channels provide more information and higher recognition accuracy, space and cost constraints prevent the number of acquisition units from being too large, so the contradiction between accuracy and the number of acquisition units needs to be resolved.
Disclosure of Invention
The invention aims to provide wearable bioelectrical equipment and a bioelectrical action recognition and self-calibration method, which can improve action recognition precision, resist deviation influence caused by individual difference and improve recognition robustness.
In order to achieve the purpose, the invention adopts the following technical scheme:
the wearable bioelectric equipment comprises an acquisition module, a signal processing module and a convolutional neural network, and is characterized by further comprising an STL module and an FTL module; the system comprises a signal processing module, an STL module, a convolutional neural network and an FTL module, wherein the signal processing module, the STL module, the convolutional neural network and the FTL module are sequentially connected; the acquisition module is used for acquiring and primarily processing the biological electric signal; the signal processing module is used for preprocessing the signals acquired and preliminarily processed by the acquisition module to obtain a polar coordinate radar chart; the STL module is used for carrying out end-to-end spatial transformation on the polar coordinate radar chart under the condition of unknown body rotation angle according to affine inverse transformation, and carrying out rotation, cutting and scaling to obtain a corrected feature chart and a spatial transformation rotation angle; the convolutional neural network receives and classifies the corrected feature map, and outputs an action probability value according to a classification result and a data set; and the FTL module selects the action corresponding to the maximum value in the action probability values as an identification result.
In an optional implementation, the STL module comprises an image transformation unit and a sampler. The image transformation unit comprises a positioning network and an affine transformation unit; the positioning network is a fully connected network plus a regression layer, or a CNN plus a regression layer, and outputs a predicted affine transformation matrix from the input polar coordinate radar chart through a plurality of hidden layers. The affine transformation unit calculates, for each coordinate position of the transformed feature map, the corresponding coordinate position in the input polar coordinate radar chart, based on the affine transformation matrix predicted by the positioning network. The sampler acquires the corresponding pixel values according to the polar-coordinate-radar-chart coordinate positions determined by the affine transformation unit and fills the transformed feature map to obtain the corrected feature map.
In an optional implementation manner, the sampler acquires corresponding pixel values by means of bilinear interpolation, and fills the transformed feature map.
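As an illustration of the bilinear sampling step described above, the following is a minimal numpy sketch; it assumes the affine transformation unit has already produced, for every output pixel, the (possibly fractional) source coordinates in the input polar coordinate radar chart, and the array names and shapes are illustrative only.

    import numpy as np

    def bilinear_sample(src, xs, ys):
        """Fill the transformed feature map by bilinear interpolation.

        src    : (H, W) input polar-coordinate radar image (grayscale).
        xs, ys : arrays of fractional source coordinates produced by the
                 affine inverse transformation, one pair per output pixel.
        Returns an array with the same shape as xs containing sampled values.
        """
        H, W = src.shape
        x0 = np.clip(np.floor(xs).astype(int), 0, W - 2)
        y0 = np.clip(np.floor(ys).astype(int), 0, H - 2)
        x1, y1 = x0 + 1, y0 + 1
        wx = np.clip(xs, 0, W - 1) - x0          # horizontal interpolation weight
        wy = np.clip(ys, 0, H - 1) - y0          # vertical interpolation weight
        top    = (1 - wx) * src[y0, x0] + wx * src[y0, x1]
        bottom = (1 - wx) * src[y1, x0] + wx * src[y1, x1]
        return (1 - wy) * top + wy * bottom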
In an optional implementation manner, the number of the acquisition modules is N, and N is greater than or equal to 2.
In an optional embodiment, the action probability values include an action probability value p_i^0 corresponding to the initial position (a wearing angle of 0 degrees) and action probability values p_i^{θ_m} corresponding to wearing angles that deviate from the initial position by θ_m degrees, where i denotes the action numbered i and M is the number of possible wearing deflection angles between adjacent acquisition modules preset by the device. The FTL module sums the probabilities of each action,

    P_i = Σ_{m=0}^{M-1} p_i^{θ_m},

and the action corresponding to the maximum P_i is the recognition result.
In an optional implementation, the FTL module further calculates a fine tuning rotation angle θ_ft, specifically:

    θ_ft = Σ_{m=0}^{M-1} p_i^{θ_m} · θ_m,                        if the minimum of p_i^{θ_0}, …, p_i^{θ_{M-1}} lies at m = 0 or m = M-1;
    θ_ft = Σ_{m=0}^{M-1} p_i^{θ_m} · θ_m + p_i^{θ_0} · (360°/N),  otherwise,

wherein N is the number of acquisition modules; and the spatial transformation rotation angle and the fine tuning rotation angle are added to obtain a self-calibration angle.
In an optional embodiment, the preprocessing includes two parts, data preprocessing and data-image conversion, and the data preprocessing includes FIR filtering, normalization and envelope extraction.
In an optional implementation mode, the wearable bioelectric device further comprises a communication module, and the communication module sends the signals acquired by the acquisition module and subjected to primary processing to the signal processing module.
A bioelectric action recognition and self-calibration method is characterized by comprising the following steps:
S1, the acquisition module acquires bioelectrical signals; S2, the bioelectrical signals obtained in step S1 are input into the communication module, which sends them to the signal processing module; S3, the signal processing module performs data preprocessing on the bioelectrical signals and then performs data-image conversion on the preprocessed data to obtain a polar coordinate radar chart; S4, an end-to-end spatial transformation is performed on the polar coordinate radar chart, with the body rotation angle unknown, according to an affine inverse transformation, and rotation, cropping and scaling are carried out to obtain a corrected feature map and a spatial transformation rotation angle θ_st; S5, the corrected feature map enters the convolutional neural network for classification, and action probability values are output according to the classification result and the data set; S6, the probability values of the same action at a preset initial position and at a plurality of deflection angles deviating from the initial position are added to obtain the probability sum of each action, and the action with the maximum probability sum is taken as the action recognition result; meanwhile, the fine tuning rotation angle θ_ft, i.e. the rotation of the wearing angle within the angle between adjacent acquisition modules, is estimated from the initial position and its action probability values together with the deflection angles deviating from the initial position and their corresponding action probability values, and the self-calibration angle θ_calib = θ_st + θ_ft is obtained.
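As a compact illustration of steps S1-S6, the following Python sketch strings the modules together; all module objects and method names here are hypothetical stand-ins, not part of the invention's specification.

    def recognize_and_calibrate(raw_signal, comm, sigproc, stl, cnn, ftl):
        """Sketch of steps S1-S6; every module here is a stand-in object."""
        samples = comm.receive(raw_signal)                 # S1-S2: acquisition and transfer
        radar_img = sigproc.to_polar_radar(samples)        # S3: preprocessing + image conversion
        corrected, theta_st = stl.transform(radar_img)     # S4: affine inverse transformation
        probs = cnn.classify(corrected)                    # S5: per-action, per-angle probabilities
        action, theta_ft = ftl.decide(probs)               # S6: action + fine tuning angle
        theta_calib = theta_st + theta_ft                  # self-calibration angle
        return action, theta_calib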
In an optional embodiment, step S4 is specifically implemented by an STL module. The STL module comprises an image transformation unit and a sampler; the image transformation unit comprises a positioning network and an affine transformation unit, the positioning network being a fully connected network plus a regression layer or a CNN plus a regression layer, which outputs a predicted affine transformation matrix from the input polar coordinate radar chart through a plurality of hidden layers. The affine transformation unit calculates, for each coordinate position of the transformed feature map, the corresponding coordinate position in the input polar coordinate radar chart, based on the affine transformation matrix predicted by the positioning network. The sampler acquires the corresponding pixel values according to the polar-coordinate-radar-chart coordinate positions determined by the affine transformation unit and fills the transformed feature map to obtain the corrected feature map.
In an optional embodiment, before the step S4, a step of constructing a data set by using a polar radar chart, and dividing the data set into a training set and a test set is further included.
In an alternative embodiment, the fine tuning rotation angle θ_ft is specifically:

    θ_ft = Σ_{m=0}^{M-1} p_i^{θ_m} · θ_m,                        if the minimum of p_i^{θ_0}, …, p_i^{θ_{M-1}} lies at m = 0 or m = M-1;
    θ_ft = Σ_{m=0}^{M-1} p_i^{θ_m} · θ_m + p_i^{θ_0} · (360°/N),  otherwise,

wherein N is the number of acquisition modules, p_i^0 is the action probability value corresponding to the initial position (a wearing angle of 0 degrees), p_i^{θ_m} is the action probability value corresponding to a wearing angle deviating from the initial position by θ_m degrees, i denotes the action numbered i, and M is the preset number of possible wearing deflection angles between adjacent acquisition modules.
Advantageous effects
The action recognition and self-calibration method of the wearable bioelectric device according to the invention is based on the translation, rotation and scaling properties of the affine inverse transformation, combined with a convolutional neural network. It further improves classification accuracy, realizes automatic calibration, improves the robustness of action recognition and improves the user's experience of wearing and using the device. Compared with existing action recognition methods, it has the following beneficial effects:
1. Individual differences can be resisted, the deviation caused by the randomness with which the user wears the bioelectric device can be compensated, and the robustness of action recognition is improved;
2. No series of calibration steps is required before the device is worn, so the user can put it on and use it immediately;
3. The method can overcome the resolution limit of sparse electrodes and realize high-precision calibration of the rotation angle.
Drawings
FIG. 1 is a schematic diagram of a process for motion recognition and self-calibration of a wearable bioelectrical device according to the present invention;
FIG. 2 is a schematic diagram of a wearable bioelectrical device according to the present invention;
FIG. 3 is a block diagram of the composition and operation of a wearable bioelectrical device STL module according to the present invention;
FIG. 4 is a polar coordinate radar chart before and after preprocessing according to the present invention;
FIG. 5 is a diagram illustrating the standard left-slide, right-slide and click actions used for action recognition according to the present invention;
fig. 6 is an initial wearing schematic diagram of a wearable bioelectrical device of the present invention as an sEMG bracelet;
fig. 7 is a schematic diagram of the muscle distribution of each angle and a corresponding polar coordinate radar chart when the wearable bioelectric device (sEMG bracelet) of the present invention is worn in a click operation;
FIG. 8 is a flow chart of a method of motion recognition and self-calibration of a wearable bioelectrical device in accordance with the present invention;
FIG. 9 is an exemplary polar radar chart before and after affine transformation is utilized in the method for motion recognition and self-calibration of a wearable bioelectric device according to the present invention;
fig. 10 is a wearing schematic diagram of a wearable bioelectric device of the present invention as a sEMG leg ring.
Detailed Description
The method for recognizing the action and self-calibrating the wearable bioelectrical device according to the present invention will be further described and illustrated in detail with reference to the accompanying drawings and embodiments.
The wearable bioelectric device and the bioelectric action recognition and self-calibration method can be applied to many scenarios, for example gesture recognition in various scenes and actions agreed in advance, such as sign-language gestures for collect, pass, hold and the like; they can further be used for recognition of finger and leg movements.
Example 1
This example illustrates an embodiment of the wearable bioelectric device of the present invention, which may be a bracelet, a foot ring, or a head- or neck-worn device (such as a neck collar or a face mask); figs. 6 and 10 show embodiments of an sEMG bracelet and an sEMG leg ring, respectively.
As shown in fig. 1-2, the bioelectric device includes an acquisition module, a communication module, a signal processing module, an STL module, a convolutional neural network, and an FTL module;
the STL in the STL module is an abbreviation of a Spatial transform Layer, and the FTL in the FTL module is an abbreviation of a Fine Tuning Layer;
the acquisition module is connected with the communication module, the communication module is connected with the signal processing module, and the signal processing module, the STL module, the convolutional neural network and the FTL module are sequentially connected; the acquisition module acquires and preliminarily processes the bioelectrical signal and comprises an acquisition circuit, a filter and an amplifier; in specific implementation, the functions of filtering and amplifying can be realized simultaneously, and when the acquired signal value is large enough, filtering can be performed first and then amplifying can be performed; on the contrary, when the amplitude value of the acquired signal is very low, the signal is amplified first and then filtered; and obtaining signals after the initial processing of the acquisition module.
The communication module sends the signals preliminarily processed by the acquisition module to the signal processing module. The signal processing module preprocesses the signals transmitted by the communication module to obtain a polar coordinate radar chart, which is passed to the STL module as a feature map. In a specific implementation, the preprocessing comprises two parts, data preprocessing and data-image conversion, where the data preprocessing comprises FIR filtering, normalization and envelope extraction.
as shown in fig. 4, the left side is 8-channel data acquired by 8 acquisition modules, that is, a radar map expressed by the 8-channel data before preprocessing in a polar coordinate form, in the map, C-1, C-2, C-3, C-4, C-5, C-6, C-7, and C-8 correspond to 0 degree, 45 degrees, 90 degrees, 135 degrees, 180 degrees, 225 degrees, 270 degrees, and 315 degrees, respectively; taking 0 degree as a center, and sequentially connecting the data values extending towards 8 directions to form a radar map; taking an 8-electrode bracelet as an example, assuming that a sampling rate is 1KHz (a sampling module circuit determines that, in general, the sampling rate or the sampling period is less than or equal to 1 ms), 8 data can be acquired by corresponding electrodes in 8 acquisition modules within 1ms, each electrode acquires one data, and data of each channel is expressed in a polar coordinate form and is represented by a graph, namely a polar coordinate radar chart.
The signal processing module receives the 8-channel data transmitted by the acquisition module through the communication module and preprocesses it; the preprocessing comprises data preprocessing and data-image conversion, and yields the polar coordinate radar chart that is input to the STL module as a feature map (in the specific implementation of the right-hand chart of fig. 4, the feature map is represented as a grayscale image of 84 × 84 pixels). The left chart in fig. 4 is the polar coordinate radar chart formed from the data before preprocessing, and the right chart is the polar coordinate radar chart formed from the data after preprocessing. As can be seen from the figure, the data values on the preprocessed radar chart are more concentrated and the features more distinct, so the chart is easier to recognize.
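The following Python sketch illustrates one possible realization of the preprocessing and data-image conversion described above (FIR band-pass filtering, envelope extraction, normalization and rasterization into an 84 × 84 polar radar image); the filter band, the one-second window assumption and the polygon-filling rasterization are illustrative choices, not the patented implementation.

    import numpy as np
    from scipy.signal import firwin, filtfilt, hilbert

    FS = 1000          # assumed sampling rate (Hz), per the 1 kHz figure above
    N_CH = 8           # number of acquisition channels / electrodes

    def preprocess(window):
        """window: (n_samples, N_CH) raw sEMG samples (e.g. one second) -> one value per channel."""
        taps = firwin(numtaps=63, cutoff=[20, 450], pass_zero=False, fs=FS)  # assumed band
        filtered = filtfilt(taps, [1.0], window, axis=0)        # FIR band-pass filtering
        envelope = np.abs(hilbert(filtered, axis=0))            # envelope extraction
        values = envelope.mean(axis=0)                          # one feature per channel
        return values / (values.max() + 1e-8)                   # normalization to [0, 1]

    def to_polar_radar(values, size=84):
        """Rasterize the N_CH channel values as a filled polar radar polygon (grayscale image)."""
        angles = 2 * np.pi * np.arange(N_CH) / N_CH              # channels C-1..C-8 -> 0..315 degrees
        img = np.zeros((size, size), dtype=np.float32)
        c = (size - 1) / 2.0
        ys, xs = np.mgrid[0:size, 0:size]
        px_ang = np.arctan2(ys - c, xs - c) % (2 * np.pi)
        px_rad = np.hypot(xs - c, ys - c) / c
        # radius of the radar polygon at each pixel angle (interpolated between channels)
        edge = np.interp(px_ang, np.append(angles, 2 * np.pi), np.append(values, values[0]))
        img[px_rad <= edge] = 1.0                                 # fill the radar polygon
        return img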
Fig. 3 is a composition and workflow diagram of the STL module. The STL module comprises an image transformation unit and a sampler; the image transformation unit comprises a positioning network and an affine transformation unit. The STL module feeds the preprocessed polar coordinate radar chart, i.e. the feature map input to the STL module, into the image transformation unit and the sampler. The specific implementation comprises two parts: generating the predicted spatial transformation parameters and carrying out the affine inverse transformation. The positioning network is a fully connected network plus a regression layer or a CNN plus a regression layer; the input polar coordinate radar chart passes through a plurality of hidden layers and the predicted affine transformation matrix is output. The affine transformation unit calculates, for each coordinate position of the transformed feature map, the corresponding coordinate position in the input polar coordinate radar chart (i.e. the original feature map), based on the affine transformation matrix predicted by the positioning network. The sampler acquires the pixel values of the original feature map by bilinear interpolation, according to the original-feature-map coordinate positions determined by the affine transformation unit, and fills the transformed feature map. Meanwhile, the spatial transformation rotation angle is output.
In particular, the STL module is used for rotating, cropping and scaling the feature map. The rotation, cropping and scaling are achieved by the affine inverse transformation of equation (1):

    [ x_s ]   [ T_11  T_12  T_13 ] [ x_t ]
    [ y_s ] = [ T_21  T_22  T_23 ] [ y_t ]        (1)
                                   [  1  ]

where the 2 × 3 matrix is the affine transformation matrix (T matrix for short, containing 6 parameters), (x_t, y_t) are the coordinates of the feature map after the affine inverse transformation, and (x_s, y_s) are the coordinates of the feature map before the affine inverse transformation. The coordinates of the input feature map (i.e. the preprocessed polar coordinate radar chart, also called the original feature map, the right chart of fig. 4) thus correspond, after the affine inverse transformation, to the spatial transformation of the feature map. This spatial transformation can realize cropping, translation, rotation, scaling and shearing of the input feature map, and only 6 parameters (the elements T_11, T_12, T_13, T_21, T_22 and T_23) need to be generated. When the scaling factors of the left 2 × 2 sub-matrix (the matrix formed by T_11, T_12, T_21 and T_22) are less than 1, the spatial transformation is a contraction. When the spatial transformation includes rotation, the expression of the affine inverse transformation is:

    T = [ a·cosθ   −b·sinθ   T_13 ]
        [ c·sinθ    d·cosθ   T_23 ]

where a and d are scaling parameters, c and b are shearing parameters, and θ is the rotation angle.

In a specific implementation with this affine transformation matrix, the spatial transformation from the input feature map to the output feature map can be carried out one or more times, each affine transformation matrix being taken as 1 layer. With l layers, θ_{l-1}^{l} is the spatial transformation rotation angle from the input feature map (corresponding to layer l-1) to the output feature map (corresponding to layer l). The value of l is greater than or equal to 1; preferably, l is 2 in this embodiment. If l equals 1, θ_0^1 is directly output as the spatial transformation rotation angle θ_st; if l equals 2, the output θ_0^1 of layer 1 and the output θ_1^2 of layer 2 are added to obtain the total spatial transformation rotation angle θ_st.
In another implementation, different from the scaling and cropping above, a more constrained attention mechanism is used, and the affine transformation matrix is expressed as:

    T = [ T_11   0      T_13 ]
        [ 0      T_11   T_23 ]

In practice, changing T_11, T_13 and T_23 realizes cropping, translation and isotropic scaling without rotation; in this specific implementation the output spatial transformation rotation angle θ_st is 0.
In a specific implementation, the STL module forward-propagates the incoming polar coordinate radar chart (corresponding to the first row of fig. 3) to obtain the affine transformation matrix, and then corrects the radar chart according to the affine transformation matrix in terms of rotation, cropping and scaling to obtain the corrected feature map (corresponding to the second row of fig. 3); meanwhile, the STL module extracts the spatial transformation rotation angle from the affine transformation matrix for the correction.
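For illustration, the following PyTorch sketch assembles an STL-like layer from the pieces described above: a CNN-plus-regression positioning network, an affine grid and a bilinear sampler. The layer sizes are arbitrary, and reading the rotation angle off the predicted matrix with atan2 assumes comparable scale factors on the two axes.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class STL(nn.Module):
        """Spatial Transformation Layer sketch: predicts a 2x3 affine matrix and resamples."""
        def __init__(self):
            super().__init__()
            self.loc = nn.Sequential(                       # localization network (CNN + regression)
                nn.Conv2d(1, 8, 7), nn.MaxPool2d(2), nn.ReLU(),
                nn.Conv2d(8, 10, 5), nn.MaxPool2d(2), nn.ReLU(),
                nn.Flatten(), nn.Linear(10 * 17 * 17, 32), nn.ReLU(),
                nn.Linear(32, 6),                           # the 6 affine parameters T11..T23
            )
            # initialize the regression layer to the identity transform
            self.loc[-1].weight.data.zero_()
            self.loc[-1].bias.data.copy_(torch.tensor([1, 0, 0, 0, 1, 0], dtype=torch.float))

        def forward(self, x):                               # x: (B, 1, 84, 84) radar images
            theta = self.loc(x).view(-1, 2, 3)              # predicted affine matrices
            grid = F.affine_grid(theta, x.size(), align_corners=False)
            corrected = F.grid_sample(x, grid, mode='bilinear', align_corners=False)
            # rotation angle (radians) read from the 2x2 sub-matrix, assuming similar scales
            theta_st = torch.atan2(theta[:, 1, 0], theta[:, 0, 0])
            return corrected, theta_st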
The first row of fig. 3 shows the click action (see fig. 5) with the wearing angle rotated counterclockwise by 0°, 45°, 90°, 135°, 180°, 225°, 270° and 315°; it can be seen that the feature map rotates around the center by the corresponding angle as the wearing angle changes. Comparing the input and output of the STL module, however, the corrected feature maps (corresponding to the second row of fig. 3) almost all coincide with the direction indicated by a wearing angle of 0°, indicating that the STL module can achieve coarse calibration over large angles. Because space is limited, the feature maps before and after correction are shown at low resolution; fig. 9 details the difference between the feature maps input to (left part of fig. 9) and output by (right part of fig. 9) the STL module. As can be seen from fig. 9, the radar charts of a given action collected at different angles (left) differ considerably, but after the rotation transformation of the STL module their angles tend to be consistent, and the radar charts in the right image are much more similar to one another than those in the left image.
The STL module outputs the corrected feature map, which enters the convolutional neural network (CNN) for classification, and the CNN outputs action probabilities. The output action probabilities enter the FTL module, which outputs the action classification result; meanwhile, the FTL module calculates the fine tuning rotation angle θ_ft from the action probabilities output by the CNN, and the final action recognition result is obtained.
There may be one or more STL modules and convolutional neural networks, and their order may be adjusted.
Regarding the action probabilities: for a system comprising N acquisition modules, each corresponding to one electrode (i.e. the number of electrodes is N), the FTL module is configured to estimate the angular rotation of the feature map within 360°/N, the angle between adjacent electrodes. A bracelet with N electrodes divides 360 degrees into N equal parts; when the bracelet rotates by a multiple of 360°/N the polar coordinate radar chart repeats, but within a rotation smaller than 360°/N the polar coordinate radar chart changes rather than remaining constant. This can also be seen from fig. 7, which illustrates 8 electrodes: within the 45-degree range, the radar charts at 15, 22.5 and 30 degrees differ from those at 0 and 45 degrees.
Further consider the effect of the wearing angle. The inter-electrode angle 360°/N is subdivided into M parts (M is the number of possible wearing deflection angles between adjacent electrodes preset by the device), with corresponding deflection angles θ_0, θ_1, …, θ_{M-1}, where p_i^0 is the probability of action i when the wearing angle is 0 degrees (the initial position), and p_i^{θ_1}, …, p_i^{θ_{M-1}} are the probabilities of action i when the wearing angle deviates from the initial position by θ_1, …, θ_{M-1} degrees. The FTL module sums the probabilities of each action, i.e.:

    P_i = Σ_{m=0}^{M-1} p_i^{θ_m}

where the subscript i denotes the action numbered i, P_i is the total probability of that action, and p_i^{θ_m} is the probability of the action when worn at angle θ_m. The gesture corresponding to the maximum P_i is selected as the recognition result: Gesture = max(P_1, …, P_i, …, P_K).
For a given action, the fine tuning rotation angle θ_ft output by the FTL module is calculated as:

    θ_ft = Σ_{m=0}^{M-1} p_i^{θ_m} · θ_m,                        if the minimum of p_i^{θ_0}, …, p_i^{θ_{M-1}} lies at m = 0 or m = M-1;
    θ_ft = Σ_{m=0}^{M-1} p_i^{θ_m} · θ_m + p_i^{θ_0} · (360°/N),  otherwise,

where p_i^{θ_m} denotes the probability of the action when the wearing angle is θ_m, and θ_1 < θ_2 < … < θ_{M-1}. That is, when the minimum of the M angle probabilities p_i^{θ_0}, …, p_i^{θ_{M-1}} lies at one of the two ends (i.e. at m = 0, the initial electrode position, or at m = M-1, the angle among the M that is closest to the adjacent electrode), the fine tuning rotation angle θ_ft equals the sum of the products of all the angle probabilities and their corresponding angles; otherwise, when the minimum lies neither at the initial position nor at the minimum offset from the adjacent electrode, θ_ft equals that sum plus the additional term p_i^{θ_0} · (360°/N).
The spatial transformation rotation angle θ_st output by the STL module and the fine tuning rotation angle θ_ft are added to obtain the self-calibration angle θ_calib, with θ_calib = θ_st + θ_ft.
In a specific embodiment with M equal to 3, the inter-electrode angle 360°/N is divided into three equal parts of 120°/N, and the fine tuning rotation angle θ_ft equals:

    θ_ft = p^{120/N} · (120°/N) + p^{240/N} · (240°/N),                    if min(p^0, p^{120/N}, p^{240/N}) is p^0 or p^{240/N};
    θ_ft = p^{120/N} · (120°/N) + p^{240/N} · (240°/N) + p^0 · (360°/N),    if min(p^0, p^{120/N}, p^{240/N}) is p^{120/N}.

The meaning of this formula is: the minimum of the probabilities output by the convolutional neural network for wearing angles of 0°, 120°/N and 240°/N, min(p^0, p^{120/N}, p^{240/N}), is found. When the minimum corresponds to a wearing angle of 0° or 240°/N (i.e. p^0 or p^{240/N}), the fine tuning rotation angle θ_ft is the probability of a wearing angle of 120°/N multiplied by 120°/N plus the probability of a wearing angle of 240°/N multiplied by 240°/N; otherwise, when the minimum corresponds to a wearing angle of 120°/N, θ_ft is the probability of a wearing angle of 120°/N multiplied by 120°/N, plus the probability of a wearing angle of 240°/N multiplied by 240°/N, plus the probability of a wearing angle of 0° multiplied by 360°/N.
In another specific embodiment, the number of electrodes N is 8, M is 3, and 3 gestures are recognized: left slide, click and right slide, as shown in fig. 5.
The feature map corrected by the STL module is sent to the convolutional neural network for classification, and the network outputs a string of action probabilities:

    [ p_c^0, p_c^15, p_c^30, p_l^0, p_l^15, p_l^30, p_r^0, p_r^15, p_r^30 ]

where p_c^0 is the probability of a click action with a bracelet wearing angle of 0 degrees, p_c^15 the probability of a click action with a wearing angle of 15 degrees, p_c^30 the probability of a click action with a wearing angle of 30 degrees, p_l^0 the probability of a left-slide action with a wearing angle of 0 degrees, p_l^15 the probability of a left-slide action with a wearing angle of 15 degrees, p_l^30 the probability of a left-slide action with a wearing angle of 30 degrees, p_r^0 the probability of a right-slide action with a wearing angle of 0 degrees, p_r^15 the probability of a right-slide action with a wearing angle of 15 degrees, and p_r^30 the probability of a right-slide action with a wearing angle of 30 degrees.
The FTL module sums the probabilities of each gesture, i.e.:

    p_l = p_l^0 + p_l^15 + p_l^30
    p_r = p_r^0 + p_r^15 + p_r^30
    p_c = p_c^0 + p_c^15 + p_c^30

where p_l is the total probability of the left-slide action, p_r the total probability of the right-slide action, p_c the total probability of the click action, and the superscripts 0, 15 and 30 are the corresponding wearing angles.
The gesture corresponding to the maximum of the three probability sums p_l, p_r and p_c is the recognition result, that is: Gesture = max(p_l, p_r, p_c).
Further, the fine tuning rotation angle θ_ft output by the FTL module is calculated as follows:

    θ_ft = p^15 · 15° + p^30 · 30°,               if min(p^0, p^15, p^30) is p^0 or p^30;
    θ_ft = p^15 · 15° + p^30 · 30° + p^0 · 45°,   if min(p^0, p^15, p^30) is p^15,

with the angles expressed in radians when multiplied. The meaning of this formula is: the minimum of the probabilities output by the convolutional neural network for wearing angles of 0°, 15° and 30°, min(p^0, p^15, p^30), is found. When the minimum corresponds to a wearing angle of 0° or 30° (i.e. p^0 or p^30), the fine tuning rotation angle θ_ft equals the probability of a wearing angle of 15° multiplied by the radian equivalent of 15° plus the probability of a wearing angle of 30° multiplied by the radian equivalent of 30°; when the minimum is the probability of a wearing angle of 15°, i.e. p^15, θ_ft equals the probability of a wearing angle of 15° multiplied by the radian equivalent of 15°, plus the probability of a wearing angle of 30° multiplied by the radian equivalent of 30°, plus the probability of a wearing angle of 0° multiplied by the radian equivalent of 45°. The spatial transformation rotation angle θ_st output by the STL module and the fine tuning rotation angle θ_ft are added to obtain the self-calibration angle θ_calib = θ_st + θ_ft.
Example 2
Taking the bracelet as an example, the sEMG bracelet is worn on the subject's forearm. The initial wearing position is shown in fig. 6: with the palm spread flat, the position directly facing the flexor carpi radialis is the initial wearing position, corresponding to 0°.
Fig. 8 is a flow chart of a bioelectricity-based identification and self-calibration method of the present invention, which specifically includes the following steps:
s1, collecting a bioelectricity signal by a collecting module;
s2, inputting the bioelectric signals obtained in the S1 into a communication module, and sending the bioelectric signals into a signal processing module by the communication module;
and S3, the signal processing module performs data preprocessing (filtering, normalization processing and envelope calculation) on the bioelectrical signal, and then converts the bioelectrical signal to obtain a polar coordinate radar chart.
S4, an end-to-end spatial transformation is performed on the polar coordinate radar chart, with the body rotation angle unknown, according to the affine inverse transformation, and rotation, cropping and scaling are carried out to obtain a corrected feature map and a spatial transformation rotation angle θ_st;
S5, the corrected feature map enters the convolutional neural network for classification, and the action probability values are output as the classification result;
s6, adding probability values of the same action at a preset initial position and a plurality of deflection angles deviating from the initial position in the action probability values to obtain the probability sum of all actions, and taking the probability sum of the maximum action as an action identification result;
meanwhile, the fine tuning rotation angle θ_ft, i.e. the rotation of the wearing angle within the angle between adjacent acquisition modules, is estimated from the initial position and its action probability values together with the deflection angles deviating from the initial position and their corresponding action probability values, and the self-calibration angle θ_calib = θ_st + θ_ft is obtained,
where θ_ft is the fine tuning rotation angle, which resolves the rotation within the coverage angle of adjacent acquisition modules and is calculated from the products of each angle and the corresponding probability (similarity). The fine tuning rotation angle is the rotation angle within the span of adjacent acquisition modules, a more refined estimate of the wearing angle on top of the wearing rotation angle obtained in step S4. An optional method of calculating θ_ft is given in embodiment 1.
Further, step S4 is specifically implemented by an STL module, which comprises an image transformation unit and a sampler. The image transformation unit comprises a positioning network and an affine transformation unit; the positioning network is a fully connected network plus a regression layer or a CNN plus a regression layer, and the input feature map passes through a plurality of hidden layers to output the predicted affine transformation matrix. The affine transformation unit calculates, for each coordinate position of the feature map after the affine inverse transformation, the corresponding coordinate position in the input feature map (the original feature map), based on the affine transformation matrix predicted by the positioning network. The sampler acquires the pixel values of the original feature map by bilinear interpolation, according to the original-feature-map coordinate positions determined by the affine transformation unit, and fills the transformed feature map. Meanwhile, the spatial transformation rotation angle is output.
Further, before step S4, a data set is constructed by using the polar coordinate radar chart, and the data set is divided into a training set and a test set.
In a specific embodiment, the generation process of the training set and the test set is as follows: several subjects wearing the device acquire data for the specified actions at different wearing angles of the bracelet on the forearm. For example, with 8 electrodes, for a wearing angle of 0 degrees (e.g., as in fig. 6, with the palm spread flat and electrode No. 1 directly facing the flexor carpi radialis), data are collected at wearing angles of 0, 15, 22.5 and 30 degrees (22.5 degrees is used only for illustration; when the angle between electrodes is divided into 3 equal parts for the 8-electrode case, this intermediate angle does not exist), and the collected data are placed in the data folder corresponding to the 0-degree wearing angle. By analogy, for a 45-degree wearing angle, data at wearing angles of 45, 60 and 75 degrees are collected and placed in the data folder corresponding to the 45-degree wearing angle; for a 90-degree wearing angle, data at 90, 105 and 120 degrees are collected and placed in the 90-degree folder; for a 135-degree wearing angle, data at 135, 150 and 165 degrees are collected and placed in the 135-degree folder; for a 180-degree wearing angle, data at 180, 195 and 210 degrees are collected and placed in the 180-degree folder; for a 225-degree wearing angle, data at 225, 240 and 255 degrees are collected and placed in the 225-degree folder; for a 270-degree wearing angle, data at 270, 285 and 300 degrees are collected and placed in the 270-degree folder; and for a 315-degree wearing angle, data at 315, 330 and 345 degrees are collected and placed in the 315-degree folder.
In a specific implementation, the actions are collected at the circuit sampling rate (the sampling rate of the acquisition circuit module is greater than or equal to 1 kHz); each action is performed for 1 second followed by a 2-second pause, each subject repeats this series of actions many times, and multiple data at the different angles (corresponding to 0, 15 and 30 degrees) are collected, e.g. 5000 samples in total. After the action data are acquired, the collected samples are expanded by rotating the acquired polar coordinate radar charts several times (for example, rotating 7 times in 45-degree steps, i.e. to 45, 60 and 75 degrees, 90, 105 and 120 degrees, 135, 150 and 165 degrees, 180, 195 and 210 degrees, 225, 240 and 255 degrees, 270, 285 and 300 degrees, and 315, 330 and 345 degrees; the data volume after the 7 rotations of 45 degrees is then 40000), and the sample data are divided into a training set and a test set at a ratio of 8:1. Fig. 7 shows, for 8 electrodes within the 0°-45° range, the radar charts at 15° (b in fig. 7), 22.5° (c in fig. 7) and 30° (d in fig. 7) together with the radar charts at 0° (a in fig. 7) and 45° (e in fig. 7); it can be seen that the radar charts at 0° and 45° are identical in shape apart from the 45-degree difference in angle, which is precisely the principle used to expand the data set. Actual measurements also show that the polar coordinate radar charts drawn from measured data and from the data set obtained by rotation expansion are almost identical.
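The rotation-based expansion of the data set described above could be sketched as follows; scipy's image rotation is used as one possible routine, and the folder/label bookkeeping is simplified to returned arrays.

    import numpy as np
    from scipy.ndimage import rotate

    def expand_by_rotation(images, labels, base_angles, n_electrodes=8):
        """Rotate each polar radar image by multiples of 360/N to synthesize other wearing angles."""
        step = 360.0 / n_electrodes                       # 45 degrees for an 8-electrode bracelet
        out_imgs, out_labels, out_angles = [], [], []
        for img, lab, ang in zip(images, labels, base_angles):
            for k in range(n_electrodes):                 # k = 0 keeps the measured sample
                rotated = rotate(img, angle=k * step, reshape=False, order=1)
                out_imgs.append(rotated)
                out_labels.append(lab)                    # gesture label is unchanged by rotation
                out_angles.append((ang + k * step) % 360) # wearing-angle folder for the new sample
        return np.stack(out_imgs), np.array(out_labels), np.array(out_angles)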
The ratio of the training set to the test set ranges from 6:4 to 10:1; the training set is used in the training phase, and the test set is used for action recognition in the evaluation phase. The flow of the training phase is: first the STL module forward-propagates the input feature map and outputs the spatial transformation rotation angle θ_st and the affine-transformation-matrix parameters used to correct the feature map; the corrected feature map is sent to the convolutional neural network for classification, back-propagation is performed according to the class labels of the corresponding data in the data set, and the weight parameters of the convolutional neural network are updated. After multiple rounds of training, a trained convolutional neural network is obtained. The flow of the evaluation phase is: the trained convolutional neural network is tested and verified with the test set to obtain the recognition accuracy and the fine tuning rotation angle; the spatial transformation rotation angle and the fine tuning rotation angle are added to obtain the predicted rotation angle. The fit to the true value was about 98.62%.
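A minimal PyTorch sketch of the training and evaluation phases just described, reusing the STL class from the earlier sketch; the optimizer, learning rate and number of epochs are illustrative choices, not values taken from the patent.

    import torch
    import torch.nn as nn

    def train(stl, cnn, train_loader, epochs=30, lr=1e-3):
        """Joint training of the STL module and the classification CNN (sketch)."""
        params = list(stl.parameters()) + list(cnn.parameters())
        opt = torch.optim.Adam(params, lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            for radar, label in train_loader:             # radar: (B, 1, 84, 84); label: class index
                corrected, _theta_st = stl(radar)         # forward propagation through the STL module
                logits = cnn(corrected)                   # classification on the corrected feature map
                loss = loss_fn(logits, label)
                opt.zero_grad()
                loss.backward()                           # back-propagate and update the weights
                opt.step()

    @torch.no_grad()
    def evaluate(stl, cnn, test_loader):
        correct = total = 0
        for radar, label in test_loader:
            corrected, _ = stl(radar)
            pred = cnn(corrected).argmax(dim=1)
            correct += (pred == label).sum().item()
            total += label.numel()
        return correct / total                            # recognition accuracy on the test set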
The accuracy of the method was compared with that of the existing method of recognition using a CNN alone; the trained model was tested and verified with the test set, the comparison results are shown in Tables 1 and 2, and the degree of fit between the predicted rotation angle and the true value is about 98.62%.
Table 1. Action recognition accuracy of CNN and of the invention under the expanded and original data sets (table values provided as an image in the original publication).

Table 2. Accuracy of CNN and of the invention in recognizing the gestures of different subjects under the expanded data set (table values provided as an image in the original publication).
It can therefore be seen that, compared with a CNN alone, the recognition of the invention is more accurate; and since the predicted rotation angle fits the true value closely, the series of complicated calibration operations otherwise required when a user puts on the bracelet can be avoided, so that the user can wear it and use it immediately.
Example 3
Further, depending on the wearing position, the method according to the present invention can be used to recognize movements of other body parts, for example with a leg ring. As shown in fig. 10, the movements include standing upright, standing on tiptoe (or tensing the feet) and bending the feet at 45 degrees, for which action recognition and self-calibration are performed. The left 3 figures show the leg ring (in black) worn on the upper part of the leg, and the right 3 figures show the leg ring worn on the lower part of the leg. The electrodes in the acquisition module collect the bioelectrical signals, which are transmitted to the signal processing module through the communication module; the signal processing module performs data preprocessing to generate polar coordinate radar charts, and polar-coordinate-radar-chart data sets representing the different actions are produced by batch conversion. A two-dimensional affine matrix of the form

    [ a·cosθ   −b·sinθ   T_13 ]
    [ c·sinθ    d·cosθ   T_23 ]

corresponds to rotation and scaling; scaling parameters less than 1 indicate a contraction transformation; a and d are scaling parameters and c and b are shearing parameters. If the matrix takes the form

    [ T_11   0      T_13 ]
    [ 0      T_11   T_23 ]

cropping, translation and isotropic scaling are realized by changing T_11, T_13 and T_23.
While the foregoing is directed to the preferred embodiment of the present invention, it is not intended that the invention be limited to the embodiment and the drawings disclosed herein. It is intended that all equivalents and modifications which do not depart from the spirit of the invention disclosed herein are deemed to be within the scope of the invention.

Claims (10)

1. A wearable bioelectric device comprising an acquisition module, a signal processing module and a convolutional neural network, characterized by further comprising an STL module and an FTL module, wherein the signal processing module, the STL module, the convolutional neural network and the FTL module are connected in sequence;
the acquisition module is used for acquiring and preliminarily processing bioelectrical signals;
the signal processing module is used for preprocessing the signals acquired and preliminarily processed by the acquisition module to obtain a polar coordinate radar chart;
the STL module is used for carrying out end-to-end spatial transformation on the polar coordinate radar chart under the condition of unknown body rotation angle according to affine inverse transformation, and carrying out rotation, cutting and scaling to obtain a corrected characteristic chart and a spatial transformation rotation angle;
the convolutional neural network receives and classifies the corrected feature map, and outputs an action probability value according to a classification result and a data set;
and the FTL module selects the action corresponding to the maximum value in the action probability values as the identification result.
2. The wearable bioelectric device according to claim 1, wherein the STL module comprises an image transformation unit and a sampler, the image transformation unit comprising a positioning network and an affine transformation unit; the positioning network is a fully connected network plus a regression layer or a CNN plus a regression layer, and outputs a predicted affine transformation matrix from the input polar coordinate radar chart through a plurality of hidden layers; the affine transformation unit calculates, for each coordinate position of the transformed feature map, the corresponding coordinate position in the input polar coordinate radar chart, based on the affine transformation matrix predicted by the positioning network; and the sampler acquires the corresponding pixel values according to the polar-coordinate-radar-chart coordinate positions determined by the affine transformation unit and fills the transformed feature map to obtain a corrected feature map.
3. The wearable bioelectric device according to claim 1, wherein the action probability values comprise an action probability value p_i^0 corresponding to the initial position (a wearing angle of 0 degrees) and action probability values p_i^{θ_m} corresponding to wearing angles deviating from the initial position by θ_m degrees, where i denotes the action numbered i and M is the preset number of possible wearing deflection angles between adjacent acquisition modules; the FTL module sums the probabilities of each action,

    P_i = Σ_{m=0}^{M-1} p_i^{θ_m},

and selects the action corresponding to the maximum P_i as the recognition result.
4. The wearable bioelectric device according to claim 3, wherein the FTL module further calculates a fine tuning rotation angle θ_ft, specifically:

    θ_ft = Σ_{m=0}^{M-1} p_i^{θ_m} · θ_m,                        if the minimum of p_i^{θ_0}, …, p_i^{θ_{M-1}} lies at m = 0 or m = M-1;
    θ_ft = Σ_{m=0}^{M-1} p_i^{θ_m} · θ_m + p_i^{θ_0} · (360°/N),  otherwise,

wherein N is the number of acquisition modules; and the spatial transformation rotation angle and the fine tuning rotation angle are added to obtain a self-calibration angle.
5. The wearable bioelectric device according to claim 1, wherein said preprocessing comprises two parts, data preprocessing and data-image conversion, said data preprocessing comprising FIR filtering, normalization and envelope extraction.
6. The wearable bioelectric device according to claim 1, further comprising a communication module, wherein the communication module sends the signal acquired and preliminarily processed by the acquisition module to the signal processing module.
7. A bioelectric action recognition and self-calibration method, characterized by comprising the following steps:
s1, collecting a bioelectricity signal by a collecting module;
s2, inputting the bioelectrical signal obtained in the step S1 into a communication module, and sending the bioelectrical signal into a signal processing module by the communication module;
s3, the signal processing module carries out data preprocessing on the bioelectricity signal and then carries out data image conversion on the data preprocessed to obtain a polar coordinate radar chart;
s4, performing end-to-end spatial transformation on the polar coordinate radar chart under the condition of unknown body rotation angle according to affine inverse transformation, and performing rotation, cutting and scaling to obtain a corrected feature chart and a spatial transformation rotation angle theta st
S5, the corrected feature map enters a convolutional neural network for classification, and an action probability value is output;
S6, adding, for each action, the probability values at the preset initial position and at the plurality of deflection angles deviating from the initial position to obtain each action's probability sum, and taking the action with the maximum probability sum as the action recognition result;
meanwhile, estimating the fine-tuning rotation angle θ_ft, namely the rotation of the wearing angle within the angle between adjacent acquisition modules, from the action probability values at the initial position and at the plurality of deflection angles deviating from the initial position, and obtaining the self-calibration angle θ_calib = θ_st + θ_ft (an illustrative end-to-end sketch of steps S3 to S6 follows this claim).
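A hedged end-to-end sketch of steps S3 through S6, reusing the illustrative preprocess, STLModule, and ftl_decision helpers sketched above; the radar-map conversion function, the CNN output layout (one probability per action and wearing position), and the angle recovery from the affine matrix are assumptions, not details fixed by the patent.

```python
# Illustrative pipeline for steps S3-S6 of claim 7 under the stated assumptions.
import numpy as np
import torch

def recognize(emg, stl, cnn, to_radar_map, deflection_angles, fs=1000.0):
    """stl: an STLModule instance; cnn: classifier returning (1, (M+1)*A) logits
    (assumed layout); to_radar_map: assumed helper converting the preprocessed
    signal into a (1, 1, H, W) polar coordinate radar map tensor."""
    # S3: data preprocessing, then conversion to a polar coordinate radar map.
    envelope = preprocess(emg, fs=fs)
    radar_map = to_radar_map(envelope)
    # S4: spatial transformation -> corrected feature map + rotation angle theta_st.
    corrected, theta = stl(radar_map)
    theta_st = float(torch.rad2deg(torch.atan2(theta[0, 1, 0], theta[0, 0, 0])))
    # S5: CNN classification over actions x (initial + M deflection) positions.
    logits = cnn(corrected)
    probs = torch.softmax(logits, dim=1).view(len(deflection_angles) + 1, -1)
    probs = probs.detach().numpy()
    # S6: probability summation, action choice, fine tuning and self-calibration.
    return ftl_decision(probs[0], probs[1:], np.asarray(deflection_angles), theta_st)
```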
8. The bioelectric action recognition and self-calibration method according to claim 7, wherein step S4 is implemented by an STL module; the STL module comprises an image transformation unit and a sampler; the image transformation unit comprises a positioning network and an affine transformation unit; the positioning network is a fully connected network plus a regression layer, or a CNN plus a regression layer, and passes the input polar coordinate radar map through a plurality of hidden layers to output a predicted affine transformation matrix; the affine transformation unit calculates, based on the affine transformation matrix predicted by the positioning network, the coordinate position in the input polar coordinate radar map corresponding to each coordinate position of the transformed feature map; and the sampler acquires the pixel values at the coordinate positions of the polar coordinate radar map determined by the affine transformation unit and fills them into the transformed feature map to obtain the corrected feature map.
9. The bioelectric action recognition and self-calibration method according to claim 7, further comprising, before step S4, the step of constructing a data set from the polar coordinate radar maps and dividing the data set into a training set and a test set.
10. The bioelectric action recognition and self-calibration method according to claim 7, wherein the fine-tuning rotation angle θ_ft is specifically:

(formula given in the source only as image FDA0003793656070000031)

wherein N is the number of acquisition modules, p_i^0 is the action probability value corresponding to the initial position, namely a wearing angle of 0 degrees, p_i^(θ_m) is the action probability value corresponding to a wearing angle deviating from the initial position by θ_m degrees, i denotes the action numbered i, and M is the preset number of possible wearing deflection angles between adjacent acquisition modules.
CN202210963154.5A 2022-08-11 2022-08-11 Wearable bioelectric equipment and bioelectric action recognition and self-calibration method Active CN115291730B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210963154.5A CN115291730B (en) 2022-08-11 2022-08-11 Wearable bioelectric equipment and bioelectric action recognition and self-calibration method

Publications (2)

Publication Number Publication Date
CN115291730A true CN115291730A (en) 2022-11-04
CN115291730B CN115291730B (en) 2023-08-15

Family

ID=83828936

Country Status (1)

Country Link
CN (1) CN115291730B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106991372A (en) * 2017-03-02 2017-07-28 北京工业大学 A kind of dynamic gesture identification method based on interacting depth learning model
CN110658915A (en) * 2019-07-24 2020-01-07 浙江工业大学 Electromyographic signal gesture recognition method based on double-current network
WO2020041503A1 (en) * 2018-08-24 2020-02-27 Arterys Inc. Deep learning-based coregistration
US10679046B1 (en) * 2016-11-29 2020-06-09 MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. Machine learning systems and methods of estimating body shape from images
US20200394413A1 (en) * 2019-06-17 2020-12-17 The Regents of the University of California, Oakland, CA Athlete style recognition system and method
CN112364757A (en) * 2020-11-09 2021-02-12 大连理工大学 Human body action recognition method based on space-time attention mechanism
CN112668662A (en) * 2020-12-31 2021-04-16 北京理工大学 Outdoor mountain forest environment target detection method based on improved YOLOv3 network

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
F. NOUGAROUA et al.: "Pattern recognition based on HD-sEMG spatial features extraction for an efficient proportional control of a robotic arm", Biomedical Signal Processing and Control, vol. 53 *
SHU SHEN et al.: "Movements Classification Through sEMG With Convolutional Vision Transformer and Stacking Ensemble Learning", IEEE Sensors Journal, vol. 22, no. 13, XP011913119, DOI: 10.1109/JSEN.2022.3179535 *
YINGWEI ZHANG et al.: "Learning Effective Spatial-Temporal Features for sEMG Armband based Gesture Recognition", IEEE Internet of Things Journal, vol. 7, no. 8, XP011805469, DOI: 10.1109/JIOT.2020.2979328 *

Also Published As

Publication number Publication date
CN115291730B (en) 2023-08-15

Similar Documents

Publication Publication Date Title
Ghazaei et al. Deep learning-based artificial vision for grasp classification in myoelectric hands
CN104700433B A real-time vision-based whole-body motion capture method and system thereof
CN111881887A (en) Multi-camera-based motion attitude monitoring and guiding method and device
CN111881705A (en) Data processing, training and recognition method, device and storage medium
KR20200024324A (en) Armband to track hand motion using electrical impedance measurement background
CN112069933A (en) Skeletal muscle stress estimation method based on posture recognition and human body biomechanics
CN106600626B (en) Three-dimensional human motion capture method and system
CN108376405B (en) Human motion capture system and method based on double-body sensation tracking system
US10445930B1 (en) Markerless motion capture using machine learning and training with biomechanical data
CN110544302A (en) Human body action reconstruction system and method based on multi-view vision and action training system
CN109598219B (en) Adaptive electrode registration method for robust electromyography control
CN107041585A (en) The measuring method of human dimension
CN109815776A (en) Action prompt method and apparatus, storage medium and electronic device
Guo et al. MCDCD: Multi-source unsupervised domain adaptation for abnormal human gait detection
EP3568831A1 (en) Systems, methods, and apparatuses for tracking a body or portions thereof
CN110298279A A kind of limb rehabilitation training assistance method and system, medium, and equipment
CN112464915B (en) Push-up counting method based on human skeleton point detection
Loureiro et al. Using a skeleton gait energy image for pathological gait classification
CN112183314B (en) Expression information acquisition device, expression recognition method and system
CN109765996A Gesture detection system and method insensitive to wearing position deviation, based on FMG armband
CN111881888A (en) Intelligent table control method and device based on attitude identification
CN112419479A (en) Body type data calculation method based on weight, height and body image
Albuquerque et al. A spatiotemporal deep learning approach for automatic pathological gait classification
JP2019003565A (en) Image processing apparatus, image processing method and image processing program
CN115050104A (en) Continuous gesture action recognition method based on multichannel surface electromyographic signals

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant