CN110389663B - sEMG gesture recognition method based on wavelet width learning system - Google Patents

sEMG gesture recognition method based on wavelet width learning system

Info

Publication number
CN110389663B
CN110389663B
Authority
CN
China
Prior art keywords
semg
signal
gesture
learning system
width learning
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910548939.4A
Other languages
Chinese (zh)
Other versions
CN110389663A (en)
Inventor
林佳泰
刘治
章云
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Technology
Original Assignee
Guangdong University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Technology filed Critical Guangdong University of Technology
Priority to CN201910548939.4A priority Critical patent/CN110389663B/en
Publication of CN110389663A publication Critical patent/CN110389663A/en
Application granted granted Critical
Publication of CN110389663B publication Critical patent/CN110389663B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/23Clustering techniques
    • G06F18/232Non-hierarchical techniques
    • G06F18/2321Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
    • G06F18/23213Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017Gesture based interaction, e.g. based on a set of recognized hand gestures
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20Movements or behaviour, e.g. gesture recognition
    • G06V40/28Recognition of hand or arm movements, e.g. recognition of deaf sign language

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Multimedia (AREA)
  • Biomedical Technology (AREA)
  • Dermatology (AREA)
  • Neurology (AREA)
  • Neurosurgery (AREA)
  • Probability & Statistics with Applications (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an sEMG gesture recognition method based on a wavelet width learning system, which comprises the following steps: step one, defining the number d of gesture action types according to the corresponding recognition scene; step two, acquiring the sEMG signal s with an electromyographic signal acquisition device; step three, filtering and denoising the raw sEMG signal according to its frequency characteristics, using a Butterworth filter to remove noise outside the 10 Hz–500 Hz band; step four, detecting active segments in the sEMG signal by a moving window method; step five, extracting features from the detected active segments. Compared with algorithms based on traditional deep learning networks, the method completes model training and parameter determination more quickly, thereby improving working efficiency; moreover, nodes can be dynamically expanded to increase the recognition rate of the system without fully rebuilding and retraining the model.

Description

sEMG gesture recognition method based on wavelet width learning system
Technical Field
The invention relates to the technical field of machine learning and signal classification, and in particular to an sEMG gesture recognition method based on a wavelet width learning system (a wavelet-based broad learning system).
Background
Gesture actions in body language play an important role in daily communication, for example the signals given by referees on a football pitch. Researchers have therefore focused on enabling computers and machines to recognize human gesture actions and execute the corresponding programs with high efficiency and high precision, which would change the form of human-machine communication.
Currently, gesture recognition algorithms fall mainly into the following types: classification algorithms based on visual image recognition and gesture recognition algorithms based on surface electromyographic signals (sEMG). The former process images; they place high demands on the camera equipment, and the high price of that equipment makes them difficult to popularize, although research has brought their recognition rate to a high level. Gesture recognition algorithms based on surface electromyographic signals (sEMG), by contrast, have lower hardware requirements and are easier to implement: the hardware for acquiring sEMG signals can be worn on the body and is not limited to a recognition product with a fixed camera position. However, most existing sEMG methods do not yet use artificial-intelligence algorithms to study how to classify the signals.
In recent years, a great deal of research on gesture recognition algorithms has been carried out in China and abroad, including research on signal acquisition, feature extraction and calculation, signal filtering, and feature classification models. Currently, the commonly used classification models are all based on classical machine learning algorithms, such as random forests, BP neural networks and convolutional neural networks. However, these classical network models consume a significant amount of time during training.
Disclosure of Invention
The invention aims to overcome the defects and shortcomings of the prior art by providing an sEMG gesture recognition method based on a wavelet width learning system, which completes model training and parameter determination more quickly and thereby improves working efficiency.
The aim of the invention is achieved by the following technical scheme:
a sEMG gesture recognition method based on a wavelet width learning system comprises the following steps:
step one, defining the number d of gesture action types according to the corresponding recognition scene, and assigning a serial number to each gesture;
step two, acquiring the sEMG signal s with an electromyographic signal acquisition device;
step three, filtering and denoising the raw sEMG signal according to its frequency characteristics, using a Butterworth filter to remove noise outside the 10 Hz–500 Hz band:

|H(jω)|^2 = 1 / (1 + (ω/ω_c)^(2N)),

where N is the order of the filter and ω_c is the cut-off frequency;
step four, detecting active segments in the sEMG signal by a moving window method;
step five, extracting features from the detected active segments; the features computed for each signal segment include the mean absolute value q_1, the root mean square q_2, the median frequency q_3, the mean frequency q_4, ..., q_n; concatenating the features of an active segment gives the feature vector:

x_k = [q_1, q_2, ..., q_n]^T,

where k denotes the kth active signal segment;

further, the feature vector of each active segment is taken as the input vector x of the wavelet width learning classification system;
step six, designing the input and output nodes of the wavelet width learning system according to the number of gestures defined in step one and the number of features in step five, and inputting the feature vectors obtained in step five into the wavelet width learning system for classification; during training, the parameters are updated from the labelled samples; during testing and practical use, the output node with the highest excitation degree among the output nodes of the wavelet width learning system is determined, and the serial number of that node is the serial number of the classified action;
and step seven, according to the serial number assigned to each gesture in step one, identifying the type of gesture from the serial number of the classification result.
Preferably, the moving window method in the fourth step specifically includes:
first, the signal data within a short time window are extracted and square-integrated, as shown in the following formula:

S_i = ∫_{t_i}^{t_i+ΔT} s^2(t) dt,

where s(t) is the electromyographic signal data within the window, ΔT is the window length, and S_i is the integral of the signal at time t_i, i.e. the energy value at the corresponding time; a threshold β is set for judging the energy values S_i: when the energy value S_i at time t_i is greater than the threshold β and the energy values of the next n_1 consecutive windows are also greater than the threshold, time t_i is taken as the starting time of the action; on this basis, when the energy value S_j at a later time t_j is less than the threshold β and the next n_2 consecutive energy values are also less than the threshold, t_j is taken as the end time of the action; the kth action signal segment is therefore:

s_k = s(t), t_i ≤ t ≤ t_j,

where k indexes the kth detected action segment.
Preferably, the wavelet width learning system in step six specifically comprises:
(1) Mapping different groups of feature nodes from the input vector:

Z_i = φ_i((x·w_i − a_i)/b_i), i = 1, 2, ..., n,

where w_i, a_i and b_i are the mapping weight, translation parameter and scaling parameter respectively; they are generated randomly during initialization and updated with the k-means algorithm; φ_i(·) is the ith wavelet basis function, and n is the total number of wavelet basis functions;

(2) Concatenating all groups of feature nodes:

Z^n = [Z_1, Z_2, ..., Z_n];

(3) Mapping the enhancement (increment) nodes from the feature nodes Z^n:

H^m = ξ(Z^n·W_h + β_h),

where W_h and β_h are the weight and threshold parameters of the enhancement nodes respectively; they are generated randomly at initialization and then remain fixed, so they do not need to be updated; ξ(·) is a generic excitation function, for which a sigmoid function can be used;

(4) During training, the output weight parameters are obtained by a pseudo-inverse and ridge regression algorithm:

W_all = [Z^n | H^m]^+ · Y,

where Y is the reference output of the training set and [Z^n | H^m]^+ is the pseudo-inverse of [Z^n | H^m]; during testing and practical use, the output nodes are obtained directly by the mapping

Ŷ = [Z^n | H^m] · W_all.
Compared with the prior art, the invention has the following beneficial effects:
compared with the algorithm based on the traditional deep learning network, the method can complete training of the model and determination of parameters more quickly, so that the working efficiency is improved; nodes can be dynamically expanded to increase the recognition rate of the system without the need to fully re-build and train the model.
Drawings
FIG. 1 is a schematic flow chart of the present invention;
FIG. 2 is a schematic diagram of a wavelet width learning system according to the present invention;
FIG. 3 is the myoelectric signal acquisition device (Myo armband) used in the present invention.
Detailed Description
The present invention will be described in further detail with reference to examples and drawings, but embodiments of the present invention are not limited thereto.
As shown in FIGS. 1 to 3, an sEMG gesture recognition method based on a wavelet width learning system comprises the following steps:
step one, defining the number d of gesture action types according to corresponding recognition scenes, and distributing a serial number for each gesture.
Step two, the electromyographic signal acquisition device shown in FIG. 3 is used to acquire the sEMG signal s.
Step three, filtering and denoising the raw sEMG signal according to its frequency characteristics, using a Butterworth filter to remove noise outside the 10 Hz–500 Hz band:

|H(jω)|^2 = 1 / (1 + (ω/ω_c)^(2N)),

where N is the order of the filter and ω_c is the cut-off frequency.
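As an illustration of step three, the following minimal Python sketch (not part of the patent) performs the 10 Hz–500 Hz Butterworth band-pass filtering; the sampling rate fs, the filter order and the use of SciPy's butter/filtfilt are assumptions made only for this example.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_semg(raw, fs=2000.0, low=10.0, high=500.0, order=4):
    """Remove noise outside the 10-500 Hz band with a Butterworth filter (step three)."""
    nyq = 0.5 * fs
    b, a = butter(order, [low / nyq, high / nyq], btype="bandpass")
    # Zero-phase filtering so that the timing of the active segments is not shifted.
    return filtfilt(b, a, raw)

# Example: filter one channel of a (simulated) raw sEMG recording.
fs = 2000.0
raw = np.random.randn(int(fs))   # stand-in for one second of a raw sEMG channel
s = bandpass_semg(raw, fs=fs)
```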
Step four, detecting active segments in the sEMG signal by a moving window method;
the moving window method specifically comprises the following steps:
first, the signal data within a short time window are extracted and square-integrated, as shown in the following formula:

S_i = ∫_{t_i}^{t_i+ΔT} s^2(t) dt,

where s(t) is the electromyographic signal data within the window, ΔT is the window length, and S_i is the integral of the signal at time t_i, i.e. the energy value at the corresponding time; a threshold β is set for judging the energy values S_i: when the energy value S_i at time t_i is greater than the threshold β and the energy values of the next n_1 consecutive windows are also greater than the threshold, time t_i is taken as the starting time of the action; on this basis, when the energy value S_j at a later time t_j is less than the threshold β and the next n_2 consecutive energy values are also less than the threshold, t_j is taken as the end time of the action; the kth action signal segment is therefore:

s_k = s(t), t_i ≤ t ≤ t_j,

where k indexes the kth detected action segment.
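A minimal Python sketch of this moving-window detection is given below; the window length, step size, threshold β and the counts n_1 and n_2 are hypothetical values chosen only for illustration, since the patent fixes the thresholding logic but not these numbers.

```python
import numpy as np

def detect_active_segments(s, win=128, step=64, beta=0.5, n1=3, n2=3):
    """Step four: moving-window energy detection of active sEMG segments.

    The energy S_i of each window is the sum of squared samples.  An action starts
    at t_i when S_i and the next n1 window energies exceed beta, and ends at t_j
    when S_j and the next n2 window energies fall below beta.
    """
    starts = np.arange(0, len(s) - win + 1, step)
    energy = np.array([np.sum(s[i:i + win] ** 2) for i in starts])
    above = energy > beta
    segments, onset = [], None
    for i in range(len(above)):
        if onset is None and np.all(above[i:i + 1 + n1]):
            onset = starts[i]                                 # action start time t_i
        elif onset is not None and not np.any(above[i:i + 1 + n2]):
            segments.append(s[onset:starts[i] + win])         # action segment s_k
            onset = None
    return segments
```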
Step five, features are extracted from each detected active signal segment s_k; the features computed for each signal segment include the mean absolute value q_1, the root mean square q_2, the median frequency q_3, the mean frequency q_4, ..., q_n; concatenating the features of an active segment gives the feature vector:

x_k = [q_1, q_2, ..., q_n]^T,

where k denotes the kth active signal segment.

Further, the feature vector of each active segment is taken as the input vector x of the wavelet width learning classification system.
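The following minimal Python sketch computes the four named features for one active segment; the sampling rate is an assumed value, and any further features q_5, ..., q_n mentioned in the patent are left out.

```python
import numpy as np

def segment_features(seg, fs=2000.0):
    """Step five: x_k = [MAV, RMS, median frequency, mean frequency]^T for one segment."""
    mav = np.mean(np.abs(seg))                        # q1: mean absolute value
    rms = np.sqrt(np.mean(seg ** 2))                  # q2: root mean square
    freqs = np.fft.rfftfreq(len(seg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(seg)) ** 2               # power spectrum of the segment
    cum = np.cumsum(psd)
    mdf = freqs[np.searchsorted(cum, cum[-1] / 2.0)]  # q3: median frequency
    mnf = np.sum(freqs * psd) / np.sum(psd)           # q4: mean frequency
    return np.array([mav, rms, mdf, mnf])

# Feature matrix with one row per detected active segment k:
# X = np.vstack([segment_features(seg) for seg in segments])
```

Stacking the feature vectors of all detected segments gives the input matrix that is fed to the wavelet width learning system in step six.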
Step six, designing the input and output nodes of the wavelet width learning system according to the number of gestures defined in step one and the number of features in step five, and inputting the feature vectors obtained in step five into the wavelet width learning system for classification; during training, the parameters are updated from the labelled samples; during testing and practical use, the output node with the highest excitation degree among the output nodes of the wavelet width learning system is determined, and the serial number of that node is the serial number of the classified action.
the wavelet width learning system specifically comprises:
(1) Mapping different groups of feature nodes from the input vector:

Z_i = φ_i((x·w_i − a_i)/b_i), i = 1, 2, ..., n,

where w_i, a_i and b_i are the mapping weight, translation parameter and scaling parameter respectively; they are generated randomly during initialization and updated with the k-means algorithm; φ_i(·) is the ith wavelet basis function, and n is the total number of wavelet basis functions.

(2) Concatenating all groups of feature nodes:

Z^n = [Z_1, Z_2, ..., Z_n].

(3) Mapping the enhancement (increment) nodes from the feature nodes Z^n:

H^m = ξ(Z^n·W_h + β_h),

where W_h and β_h are the weight and threshold parameters of the enhancement nodes respectively; they are generated randomly at initialization and then remain fixed, so they do not need to be updated; ξ(·) is a generic excitation function, for which a sigmoid function can be used.

(4) During training, the output weight parameters are obtained by a pseudo-inverse and ridge regression algorithm:

W_all = [Z^n | H^m]^+ · Y,

where Y is the reference output of the training set and [Z^n | H^m]^+ is the pseudo-inverse of [Z^n | H^m]; during testing and practical use, the output nodes are obtained directly by the mapping

Ŷ = [Z^n | H^m] · W_all.
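To make the structure above concrete, the following minimal Python sketch implements a wavelet width learning classifier along these lines; the Mexican-hat wavelet, the node counts, the ridge coefficient and the omission of the k-means refinement of (w_i, a_i, b_i) are all simplifying assumptions for illustration, not the patented implementation.

```python
import numpy as np

def mexican_hat(u):
    """Mexican-hat (Ricker) wavelet, used here as the wavelet basis function phi_i."""
    return (1.0 - u ** 2) * np.exp(-0.5 * u ** 2)

class WaveletBLS:
    """Wavelet width (broad) learning classifier: wavelet feature nodes,
    sigmoid enhancement nodes and a ridge-regression pseudo-inverse readout."""

    def __init__(self, n_groups=10, nodes_per_group=8, n_enhance=60, ridge=1e-3, seed=0):
        self.n_groups, self.k, self.m = n_groups, nodes_per_group, n_enhance
        self.ridge = ridge
        self.rng = np.random.default_rng(seed)

    def _feature_nodes(self, X):
        # Z_i = phi_i((X w_i - a_i) / b_i); the k-means refinement of (w_i, a_i, b_i)
        # described in the patent is omitted in this sketch.
        return np.hstack([mexican_hat((X @ W - a) / b) for W, a, b in self.maps])

    def fit(self, X, Y):
        d = X.shape[1]
        # Randomly initialised mapping weights w_i, translations a_i and scales b_i.
        self.maps = [(self.rng.standard_normal((d, self.k)),
                      self.rng.standard_normal(self.k),
                      np.abs(self.rng.standard_normal(self.k)) + 0.5)
                     for _ in range(self.n_groups)]
        Zn = self._feature_nodes(X)
        # Enhancement nodes H^m = xi(Z^n W_h + beta_h), fixed after initialisation.
        self.Wh = self.rng.standard_normal((Zn.shape[1], self.m))
        self.bh = self.rng.standard_normal(self.m)
        A = np.hstack([Zn, 1.0 / (1.0 + np.exp(-(Zn @ self.Wh + self.bh)))])
        # Ridge-regularised pseudo-inverse: W_all = (A^T A + lambda I)^-1 A^T Y.
        self.W_all = np.linalg.solve(A.T @ A + self.ridge * np.eye(A.shape[1]), A.T @ Y)
        return self

    def predict(self, X):
        Zn = self._feature_nodes(X)
        A = np.hstack([Zn, 1.0 / (1.0 + np.exp(-(Zn @ self.Wh + self.bh)))])
        # Serial number of the most excited output node = predicted gesture class.
        return np.argmax(A @ self.W_all, axis=1)
```

Here Y is the one-hot label matrix built from the gesture serial numbers of step one, and predict returns the serial number of the most excited output node, which identifies the gesture as in step seven.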
Step seven, according to the serial number assigned to each gesture in step one, the type of gesture is identified from the serial number of the classification result.
The method applies the wavelet width learning system to training and classification in a gesture recognition algorithm; by building on the width learning system it is faster and more efficient than a deep learning network, and using wavelet basis functions as the excitation functions of the feature layer improves the nonlinear fitting capability of the network.
Compared with algorithms based on traditional deep learning networks, the method completes model training and parameter determination more quickly, thereby improving working efficiency; in addition, nodes can be dynamically expanded to increase the recognition rate of the system without fully rebuilding and retraining the model.
The foregoing embodiments are illustrative of the present invention and are not to be construed as limiting it; various changes, modifications, substitutions, combinations and simplifications made without departing from the spirit and principles of the invention are intended to fall within the scope of the invention.

Claims (2)

1. An sEMG gesture recognition method based on a wavelet width learning system, characterized by comprising the following steps:
step one, defining the number d of gesture action types according to the corresponding recognition scene, and assigning a serial number to each gesture;
step two, acquiring the sEMG signal s with an electromyographic signal acquisition device;
step three, filtering and denoising the raw sEMG signal according to its frequency characteristics, using a Butterworth filter to remove noise outside the 10 Hz–500 Hz band:

|H(jω)|^2 = 1 / (1 + (ω/ω_c)^(2N)),

where N is the order of the filter and ω_c is the cut-off frequency;
step four, detecting active segments in the sEMG signal by a moving window method;
step five, extracting features from the detected active segments; the features computed for each signal segment include the mean absolute value q_1, the root mean square q_2, the median frequency q_3, the mean frequency q_4, ..., q_n; concatenating the features of an active segment gives the feature vector:

x_k = [q_1, q_2, ..., q_n]^T,

where k denotes the kth active signal segment;

further, the feature vector of each active segment is taken as the input vector x of the wavelet width learning classification system;
step six, designing the input and output nodes of the wavelet width learning system according to the number of gestures defined in step one and the number of features in step five, and inputting the feature vectors obtained in step five into the wavelet width learning system for classification; during training, the parameters are updated from the labelled samples; during testing and practical use, the output node with the highest excitation degree among the output nodes of the wavelet width learning system is determined, and the serial number of that node is the serial number of the classified action;
step seven, according to the serial number assigned to each gesture in step one, identifying the type of gesture from the serial number of the classification result;
the medium-small wave width learning system in the step six specifically comprises the following steps:
(1) Mapping different groups of feature nodes from the input vector:

Z_i = φ_i((x·w_i − a_i)/b_i), i = 1, 2, ..., n,

where w_i, a_i and b_i are the mapping weight, translation parameter and scaling parameter respectively; they are generated randomly during initialization and updated with the k-means algorithm; φ_i(·) is the ith wavelet basis function, and n is the total number of wavelet basis functions;

(2) Concatenating all groups of feature nodes:

Z^n = [Z_1, Z_2, ..., Z_n];

(3) Mapping the enhancement (increment) nodes from the feature nodes Z^n:

H^m = ξ(Z^n·W_h + β_h),

where W_h and β_h are the weight and threshold parameters of the enhancement nodes respectively; they are generated randomly at initialization and then remain fixed, so they do not need to be updated; ξ(·) is a generic excitation function, for which a sigmoid function can be used;

(4) During training, the output weight parameters are obtained by a pseudo-inverse and ridge regression algorithm:

W_all = [Z^n | H^m]^+ · Y,

where Y is the reference output of the training set and [Z^n | H^m]^+ is the pseudo-inverse of [Z^n | H^m]; during testing and practical use, the output nodes are obtained directly by the mapping

Ŷ = [Z^n | H^m] · W_all.
2. The sEMG gesture recognition method based on the wavelet width learning system according to claim 1, wherein the moving window method in the fourth step specifically comprises:
first, the signal data within a short time window are extracted and square-integrated, as shown in the following formula:

S_i = ∫_{t_i}^{t_i+ΔT} s^2(t) dt,

where s(t) is the electromyographic signal data within the window, ΔT is the window length, and S_i is the integral of the signal at time t_i, i.e. the energy value at the corresponding time; a threshold β is set for judging the energy values S_i: when the energy value S_i at time t_i is greater than the threshold β and the energy values of the next n_1 consecutive windows are also greater than the threshold, time t_i is taken as the starting time of the action; on this basis, when the energy value S_j at a later time t_j is less than the threshold β and the next n_2 consecutive energy values are also less than the threshold, t_j is taken as the end time of the action; the kth action signal segment is therefore:

s_k = s(t), t_i ≤ t ≤ t_j,

where k indexes the kth detected action segment.
CN201910548939.4A 2019-06-24 2019-06-24 sEMG gesture recognition method based on wavelet width learning system Active CN110389663B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910548939.4A CN110389663B (en) 2019-06-24 2019-06-24 sEMG gesture recognition method based on wavelet width learning system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910548939.4A CN110389663B (en) 2019-06-24 2019-06-24 sEMG gesture recognition method based on wavelet width learning system

Publications (2)

Publication Number Publication Date
CN110389663A CN110389663A (en) 2019-10-29
CN110389663B true CN110389663B (en) 2023-05-23

Family

ID=68285868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910548939.4A Active CN110389663B (en) 2019-06-24 2019-06-24 sEMG gesture recognition method based on wavelet width learning system

Country Status (1)

Country Link
CN (1) CN110389663B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110826625B (en) * 2019-11-06 2022-04-12 南昌大学 Finger gesture classification method based on surface electromyographic signals
CN111160392A (en) * 2019-12-03 2020-05-15 广东工业大学 Hyperspectral classification method based on wavelet width learning system
CN111695446B (en) * 2020-05-26 2021-07-27 浙江工业大学 Gesture recognition method integrating sEMG and AUS
CN113657479B (en) * 2021-08-12 2022-12-06 广东省人民医院 Novel multi-scale depth-width combined pathological picture classification method, system and medium
CN114098768B (en) * 2021-11-25 2024-05-03 哈尔滨工业大学 Cross-individual surface electromyographic signal gesture recognition method based on dynamic threshold and EasyTL

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106236336A (en) * 2016-08-15 2016-12-21 中国科学院重庆绿色智能技术研究院 A kind of myoelectric limb gesture and dynamics control method
CN109062401A (en) * 2018-07-11 2018-12-21 北京理工大学 A kind of real-time gesture identifying system based on electromyography signal
CN109077715A (en) * 2018-09-03 2018-12-25 北京工业大学 A kind of electrocardiosignal automatic classification method based on single lead
CN109308521A (en) * 2018-08-27 2019-02-05 广东工业大学 A kind of quaternary SerComm degree study filtering method for eliminating Physiological tremor

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102619981B1 (en) * 2016-02-02 2024-01-02 삼성전자주식회사 Gesture classification apparatus and method using electromyogram signals

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106236336A (en) * 2016-08-15 2016-12-21 中国科学院重庆绿色智能技术研究院 A kind of myoelectric limb gesture and dynamics control method
CN109062401A (en) * 2018-07-11 2018-12-21 北京理工大学 A kind of real-time gesture identifying system based on electromyography signal
CN109308521A (en) * 2018-08-27 2019-02-05 广东工业大学 A kind of quaternary SerComm degree study filtering method for eliminating Physiological tremor
CN109077715A (en) * 2018-09-03 2018-12-25 北京工业大学 A kind of electrocardiosignal automatic classification method based on single lead

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on finger motion recognition based on wavelet feature analysis (基于小波特征分析的手指动作识别研究); 李博 (Li Bo) et al.; Progress in Modern Biomedicine (现代生物医学进展); 2011-10-30 (No. 20); pp. 3942-3945 *

Also Published As

Publication number Publication date
CN110389663A (en) 2019-10-29

Similar Documents

Publication Publication Date Title
CN110389663B (en) sEMG gesture recognition method based on wavelet width learning system
CN113128552B (en) Electroencephalogram emotion recognition method based on depth separable causal graph convolution network
CN109859147A (en) A kind of true picture denoising method based on generation confrontation network noise modeling
CN103366180A (en) Cell image segmentation method based on automatic feature learning
CN102169690A (en) Voice signal recognition system and method based on surface myoelectric signal
CN101859377A (en) Electromyographic signal classification method based on multi-kernel support vector machine
CN110135365B (en) Robust target tracking method based on illusion countermeasure network
JP2018511870A (en) Big data processing method for segment-based two-stage deep learning model
CN103971329A (en) Cellular nerve network with genetic algorithm (GACNN)-based multisource image fusion method
CN112541415B (en) Brain muscle function network motion fatigue detection method based on symbol transfer entropy and graph theory
CN112651426A (en) Fault diagnosis method for rolling bearing of wind turbine generator
CN113116361A (en) Sleep staging method based on single-lead electroencephalogram
CN106097257A (en) A kind of image de-noising method and device
CN108921106A (en) A kind of face identification method based on capsule
CN113768474B (en) Anesthesia depth monitoring method and system based on graph convolution neural network
CN115859055A (en) Feature extraction method for multi-source heterogeneous big data in aircraft manufacturing process
CN104021563B (en) Method for segmenting noise image based on multi-objective fuzzy clustering and opposing learning
CN113876339A (en) Method for constructing sleep state electroencephalogram characteristic signal feature set
CN107392379B (en) Lorenz disturbance-based time series wind speed prediction method
Andronache et al. Automatic gesture recognition framework based on forearm emg activity
CN102327116B (en) Network system random resonance restoration method of cortex electroencephalographic signal
CN110399656B (en) Lower-loading waist-saving parameter design method based on fuzzy logic and neural network
CN113487519B (en) Image rain removing method based on artificial intelligence
CN110458049A (en) A kind of behavior measure and analysis method based on more visions
CN109657734A (en) It is a kind of can dynamic change machine learning model construction method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant