CN112132192A - Model training method and device, electronic equipment and storage medium - Google Patents

Model training method and device, electronic equipment and storage medium

Info

Publication number
CN112132192A
CN112132192A CN202010931325.7A
Authority
CN
China
Prior art keywords
training
classification model
model
data set
training data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010931325.7A
Other languages
Chinese (zh)
Inventor
田彦秀
韩久琦
姚秀军
桂晨光
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Haiyi Tongzhan Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd filed Critical Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN202010931325.7A priority Critical patent/CN112132192A/en
Publication of CN112132192A publication Critical patent/CN112132192A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • G06F18/2414Smoothing the distance, e.g. radial basis function networks [RBFN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computational Linguistics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Evolutionary Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention relates to a model training method and apparatus, an electronic device, and a storage medium, wherein the model training method comprises the following steps: acquiring a training data set for retraining a preset classification model; and inputting the training data set into the preset classification model and retraining the preset classification model until it converges, to obtain a new classification model; wherein the initial parameters of the preset classification model during retraining are the model parameters obtained when the preset classification model was pre-trained. In the embodiment of the invention, the model parameters obtained when the preset classification model was pre-trained are used as the initial parameters for retraining, rather than random initialization.

Description

Model training method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a model training method and apparatus, an electronic device, and a storage medium.
Background
Electromyographic pattern recognition has shown great potential for multi-degree-of-freedom decoding of movement intent; however, its application in commercial prostheses has been hampered by a lack of robustness to confounding factors. Studies have shown that, for a forearm electromyography recognition system, each 1 cm of longitudinal electrode displacement increases the classification error from about 5% to 20%, and lateral displacement increases the error to 40%. Likewise, changes in limb position, changes in skin-electrode impedance caused by dryness, humidity, and the like, muscle fatigue, and the learning effect that arises as the user gains experience in contracting the residual muscles all reduce the accuracy of movement-intention estimation based on the surface electromyographic signal.
If the model is retrained, the user may have to wait a long time because the model converges slowly, which is inconvenient for the user.
Disclosure of Invention
To solve the above technical problem or at least partially solve the above technical problem, the present application provides a model training method, apparatus, electronic device, and storage medium.
In a first aspect, the present application provides a model training method, including:
acquiring a training data set for retraining a preset classification model;
inputting the training data set into the preset classification model, and retraining the preset classification model until it converges, to obtain a new classification model;
wherein the initial parameters of the preset classification model during retraining are the model parameters obtained when the preset classification model was pre-trained.
Optionally, the obtaining a training data set for retraining the preset classification model includes:
acquiring a pre-training data set for pre-training the preset classification model;
and extracting training data from the pre-training data set to construct the training data set.
Optionally, the extracting training data from the pre-training data set to obtain the training data set includes:
acquiring surface electromyographic signals of a preset number of channels corresponding to each gesture action in at least two different gesture actions;
moving the acquired surface electromyographic signals of the preset number of channels corresponding to each gesture action, in the at least two different gesture actions, to the left or right by half a channel position to obtain the training data;
and/or moving the acquisition positions of the surface electromyographic signals of the preset number of channels corresponding to each gesture action, in the at least two different gesture actions, upwards or downwards by half a channel position to obtain the training data.
Optionally, the pre-training the preset classification model includes:
acquiring a pre-training data set and a test data set;
initializing pre-training parameters of an original classification model randomly;
using the pre-training parameters as the initial parameters when pre-training the original classification model;
inputting the pre-training data set into the original classification model, and pre-training the original classification model;
when the pre-training of the original classification model is completed, verifying the original classification model by using the test data set;
and if the original classification model is converged, obtaining the preset classification model.
Optionally, the method further includes:
acquiring surface electromyographic signals of a preset number of channels corresponding to each gesture action in at least two different gesture actions;
preprocessing the surface electromyographic signals corresponding to the at least two gesture actions to obtain electromyographic data corresponding to the at least two gesture actions;
processing the preprocessed myoelectric data corresponding to at least two gesture actions by using a sliding window algorithm to obtain training sample data corresponding to each gesture action;
and grouping the training sample data corresponding to each gesture action to obtain a pre-training data set and a testing data set.
Optionally, the preset classification model includes: a cascaded input layer, a convolution pooling layer, a full connection layer, a softmax function layer, and an output layer;
the convolution pooling layer comprises at least two convolution layers, the output of each convolution layer is connected with the input of one normalization layer, the output of each normalization layer is connected with the input of one ReLU layer, and the output of each ReLU layer is connected with the inputs of two cascaded maximum pooling layers.
Optionally, the convolution pooling layer includes 5 cascaded convolution layers, where the number of convolution kernels in the first convolution layer is 16, the number of convolution kernels in the second convolution layer is 32, the number of convolution kernels in the third convolution layer is 64, the number of convolution kernels in the fourth convolution layer is 64, the number of convolution kernels in the fifth convolution layer is 16, and the size of each convolution kernel is 3 × 3.
In a second aspect, the present application provides a model training apparatus comprising:
the acquisition module is used for acquiring a training data set for retraining the preset classification model;
the training module is used for inputting the training data set into the preset classification model and retraining the preset classification model until it converges, to obtain a new classification model;
wherein the initial parameters of the preset classification model during retraining are the model parameters obtained when the preset classification model was pre-trained.
In a third aspect, the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
a processor configured to implement the model training method according to any one of the first aspect when executing a program stored in the memory.
In a fourth aspect, the present application provides a computer-readable storage medium having stored thereon a program of a model training method, which when executed by a processor, implements the steps of the model training method of any one of the first aspects.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
in the embodiment of the invention, the model parameters obtained when the preset classification model was pre-trained are used as the initial parameters for retraining the preset classification model, rather than random initialization, so that retraining converges quickly and the calibration time experienced by the user is reduced.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a flowchart of a model training method according to an embodiment of the present disclosure;
FIG. 2 is another flow chart of a model training method according to an embodiment of the present disclosure;
FIG. 3 is a block diagram of a model training apparatus according to an embodiment of the present disclosure;
fig. 4 is a structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The main existing approach to this problem is supervised adaptation, which recalibrates the system through a short retraining session. However, to be practical and avoid inconveniencing the user, the calibration duration of this method must be minimized, which may leave too little training data, so that the classifier is insufficiently trained and lacks robustness. To this end, an embodiment of the present invention provides a model training method, an apparatus, an electronic device, and a storage medium. The model training method may be applied in a terminal, such as a PC or a prosthesis, and, as shown in fig. 1, may include the following steps:
step S101, acquiring a training data set for retraining a preset classification model;
in the embodiment of the present invention, the preset classification model refers to a classification model that has been trained in advance. For example, the preset classification model may be a convolutional neural network model that includes a cascaded input layer, convolution layer, pooling layer, full connection layer, and output layer.
Since the electromyographic signals may change over the course of a few days, in order to prevent the electromyographic classification from degrading over time, a training data set may be obtained in this step and used to retrain the preset classification model.
Step S102, inputting the training data set into the preset classification model, and retraining the preset classification model until it converges, to obtain a new classification model;
in the embodiment of the present invention, the initial parameters for retraining the preset classification model are the model parameters obtained when the preset classification model was pre-trained.
In the embodiment of the invention, the model parameters obtained when the preset classification model was pre-trained are used as the initial parameters for retraining the preset classification model, rather than random initialization.
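For illustration (this sketch is not part of the patent text): a minimal PyTorch fine-tuning loop that starts the retraining from the saved pre-trained parameters instead of a random initialization. The file name, optimizer, learning rate, and the crude loss-based convergence test are all assumptions.

```python
# Minimal retraining (fine-tuning) sketch: load pre-trained parameters,
# then continue training on the new calibration data until the loss stabilizes.
import torch
import torch.nn as nn

def retrain(model: nn.Module, calib_loader, pretrained_path="pretrained_cnn.pt",
            lr=1e-3, max_epochs=50, tol=1e-4):
    model.load_state_dict(torch.load(pretrained_path))   # warm start, not random init
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    prev_loss = float("inf")
    for _ in range(max_epochs):
        model.train()
        epoch_loss = 0.0
        for x, y in calib_loader:               # x: sEMG windows, y: gesture labels
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if abs(prev_loss - epoch_loss) < tol:   # stop once the loss stops changing
            break
        prev_loss = epoch_loss
    return model                                # the "new classification model"
```

Because the network starts from parameters that already classify the original data well, it typically needs far fewer epochs and far less calibration data than training from scratch.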
At present, the supervised adaptation method in the prior art recalibrates the system through short-time retraining. However, to be practical and avoid inconveniencing the user, the calibration duration of this method needs to be minimized, which leaves the training data insufficient, so the classifier may end up insufficiently trained. In another embodiment of the present invention, the obtaining a training data set for retraining the preset classification model includes:
acquiring a pre-training data set for pre-training the preset classification model;
in this step, a pre-training data set used when pre-training the pre-set classification model may be obtained.
And extracting training data from the pre-training data set to construct the training data set.
In this step, part of the training data in the pre-training data set may be extracted, and the training data set may be constructed based on the extracted part of the training data.
The embodiment of the invention retrains the network with training data extracted from the pre-training data set, i.e., calibrates the network model with part of the training data in the pre-training data set, and is more robust to electrode displacement, because this part of the training data can correct some of the model parameters so that the classifier adapts to the new data.
In another embodiment of the present invention, the extracting training data from the pre-training data set to obtain the training data set includes:
acquiring surface electromyographic signals of a preset number of channels corresponding to each gesture action in at least two different gesture actions;
moving the acquired surface electromyographic signals of a preset number of channels corresponding to each gesture action in at least two different gesture actions to the left or right by half a channel position to obtain the training data;
and/or moving the acquisition position of the surface electromyographic signals of the preset number of channels corresponding to each gesture action upwards or downwards by a half channel position in at least two different gesture actions to obtain the training data.
In the embodiment of the invention, the acquisition positions of the 5 channels can be uniformly shifted to the left or right by half a channel position (transverse displacement affects the classifier more than longitudinal displacement), and the network model can be calibrated with the shifted data (about one quarter of the amount of the original training data); the resulting classifier is more robust to electrode displacement.
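For illustration only (the patent does not state how the half-channel shift is computed): one simple way to emulate a signal recorded half a channel position away on an armband-style sensor is to interpolate linearly between the two neighbouring channels, as in the NumPy sketch below.

```python
# Approximate a half-channel electrode shift by averaging neighbouring channels.
# "emg" has shape (channels, samples); channels are assumed to wrap around the arm.
import numpy as np

def shift_half_channel(emg: np.ndarray, direction: str = "right") -> np.ndarray:
    n_channels = emg.shape[0]
    shifted = np.empty_like(emg)
    for c in range(n_channels):
        neighbour = (c + 1) % n_channels if direction == "right" else (c - 1) % n_channels
        shifted[c] = 0.5 * (emg[c] + emg[neighbour])   # midpoint of the two electrodes
    return shifted
```

The shifted windows then form the small calibration set used to retrain the network, at roughly a quarter of the original training data volume.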
In another embodiment of the present invention, the pre-training the preset classification model, as shown in fig. 2, includes:
step S201, acquiring a pre-training data set and a testing data set;
in the step, surface electromyographic signals of different gestures can be preprocessed, and the preprocessing comprises filtering 50Hz power frequency interference, band-pass filtering and the like. And performing activity segment detection on the preprocessed data, performing sliding window processing on the action potential data of the gesture actions, and dividing S training sample data for each gesture action to serve as training data and test data of the convolutional neural network.
For example, in this application, the collected surface electromyographic signals of 5 channels and 8 types of gesture motions are passed through a 50 Hz notch filter and then through a 4th-order 5-500 Hz Butterworth band-pass filter. The window length is 200 data points, and 400 samples are used as the input of the convolutional neural network for each type of gesture motion, where the sample quantity refers to the number of samples, that is, how many surface electromyographic windows of length 200 are input into the convolutional neural network.
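A preprocessing sketch consistent with this example is shown below (SciPy/NumPy). The 50 Hz notch, the 4th-order 5-500 Hz Butterworth band-pass filter, and the 200-point window come from the text; the 2000 Hz sampling rate, the notch quality factor, and the window step are assumptions.

```python
# Notch-filter, band-pass filter, and window the raw surface EMG.
import numpy as np
from scipy.signal import iirnotch, butter, filtfilt

FS = 2000          # assumed sampling rate in Hz (not stated in the text)
WINDOW = 200       # window length in data points (from the description)

def preprocess(raw: np.ndarray) -> np.ndarray:
    """raw: (channels, samples) surface EMG of one gesture recording."""
    b, a = iirnotch(50, Q=30, fs=FS)                 # remove 50 Hz power-line noise
    x = filtfilt(b, a, raw, axis=1)
    b, a = butter(4, [5, 500], btype="bandpass", fs=FS)
    return filtfilt(b, a, x, axis=1)

def sliding_windows(x: np.ndarray, step: int = 100) -> np.ndarray:
    """Cut (channels, samples) into (num_windows, channels, WINDOW) training samples."""
    starts = range(0, x.shape[1] - WINDOW + 1, step)
    return np.stack([x[:, s:s + WINDOW] for s in starts])
```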
Step S202, initializing pre-training parameters of an original classification model randomly;
step S203, using the pre-training parameters as the initial parameters when pre-training the original classification model;
step S204, inputting the pre-training data set into the original classification model, and pre-training the original classification model;
step S205, when the pre-training of the original classification model is completed, verifying the original classification model by using the test data set;
step S206, if the original classification model is converged, the preset classification model is obtained.
In yet another embodiment of the present invention, the method further comprises:
acquiring surface electromyographic signals of a preset number of channels corresponding to each gesture action in at least two different gesture actions;
preprocessing the surface electromyographic signals corresponding to the at least two gesture actions to obtain electromyographic data corresponding to the at least two gesture actions;
processing the preprocessed myoelectric data corresponding to at least two gesture actions by using a sliding window algorithm to obtain training sample data corresponding to each gesture action;
and grouping the training sample data corresponding to each gesture action to obtain a pre-training data set and a testing data set.
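For instance, the grouping of the windowed samples into a pre-training set and a test set could be a simple per-gesture split (a sketch; the 80/20 ratio and the shuffling are assumptions not stated in the text):

```python
# Split windowed samples into a pre-training set and a test set, per gesture.
# "windows_by_gesture" maps a gesture label to an array of shape
# (num_windows, channels, window_length).
import numpy as np

def split_dataset(windows_by_gesture: dict, train_ratio: float = 0.8, seed: int = 0):
    rng = np.random.default_rng(seed)
    train_x, train_y, test_x, test_y = [], [], [], []
    for label, windows in windows_by_gesture.items():
        order = rng.permutation(len(windows))
        cut = int(train_ratio * len(windows))
        train_x.append(windows[order[:cut]]);  train_y += [label] * cut
        test_x.append(windows[order[cut:]]);   test_y += [label] * (len(windows) - cut)
    return (np.concatenate(train_x), np.array(train_y),
            np.concatenate(test_x), np.array(test_y))
```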
The embodiment of the invention trains the convolutional neural network based on the training data and the corresponding sample labels to obtain a pre-trained model, and tests the classification accuracy of the model with the test data.
In another embodiment of the present invention, the preset classification model includes: a cascaded input layer, a convolution pooling layer, a full connection layer, a softmax function layer, and an output layer;
the convolution pooling layer comprises at least two convolution layers, the output of each convolution layer is connected with the input of one normalization layer, the output of each normalization layer is connected with the input of one ReLU layer, and the output of each ReLU layer is connected with the inputs of two cascaded maximum pooling layers.
Input layer: the input consists of M channels with a window length of window_length (i.e., window_length electromyographic data points per channel) and a sample size of L, where L = k × N, k is the number of gesture classes, and each gesture class has N training samples; the network input is therefore of size M × window_length × L.
Full connection layer: a softmax function is applied after the full connection layer;
Output layer: outputs the label of each gesture action.
The convolutional neural network structure in the embodiment of the invention is much simpler than a standard network architecture; its parameters are adjusted through self-learning and automatic modification, it has strong learning capability, and it shows clear advantages over classifiers such as SVM and LDA.
In yet another embodiment of the present invention, the convolution pooling layer includes 5 cascaded convolution layers, wherein the number of convolution kernels in the first convolution layer is 16, the number of convolution kernels in the second convolution layer is 32, the number of convolution kernels in the third convolution layer is 64, the number of convolution kernels in the fourth convolution layer is 64, the number of convolution kernels in the fifth convolution layer is 16, and the size of each convolution kernel is 3 x 3.
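A PyTorch sketch of such a network is given below. The five convolutional layers with 16, 32, 64, 64, and 16 kernels of size 3 × 3, the normalization and ReLU layers after each convolution, and the two cascaded max-pooling layers per block follow the description; the padding, the pooling size and stride, the 5-channel × 200-point input shape, and the 8 gesture classes are assumptions chosen so that the dimensions work out. The forward pass returns logits, because nn.CrossEntropyLoss applies softmax internally during training; the softmax layer is applied explicitly only at inference time.

```python
# Sketch of the convolution-pooling classifier described above.
import torch
import torch.nn as nn

class EMGConvNet(nn.Module):
    def __init__(self, num_classes: int = 8):
        super().__init__()
        blocks, in_ch = [], 1
        for out_ch in (16, 32, 64, 64, 16):          # kernel counts from the text
            blocks += [
                nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
                nn.BatchNorm2d(out_ch),              # normalization layer
                nn.ReLU(inplace=True),
                nn.MaxPool2d((1, 2), stride=1),      # two cascaded max-pooling layers
                nn.MaxPool2d((1, 2), stride=1),      # (size/stride are assumptions)
            ]
            in_ch = out_ch
        self.features = nn.Sequential(*blocks)
        self.fc = nn.Linear(16 * 5 * 190, num_classes)   # a 5 x 200 input shrinks to 5 x 190

    def forward(self, x):                 # x: (batch, 1, 5, 200) sEMG windows
        x = self.features(x)
        return self.fc(x.flatten(1))      # logits; apply softmax only at inference

# Example: probabilities for one window
# probs = EMGConvNet()(torch.randn(1, 1, 5, 200)).softmax(dim=1)
```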
In another embodiment of the present invention, there is also provided a model training apparatus, as shown in fig. 3, including:
an obtaining module 11, configured to obtain a training data set for retraining a preset classification model;
the training module 12 is configured to input the training data set into the preset classification model, and retrain the preset classification model until the preset classification model converges to obtain a new classification model;
and the initial parameters of the preset classification model during retraining are model parameters obtained during the pre-training of the preset classification model.
In another embodiment of the present invention, an electronic device is further provided, which includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the model training method in the embodiment of the method when executing the program stored in the memory.
In the electronic device provided by the embodiment of the invention, by executing the program stored in the memory, the processor acquires the training data set for retraining the preset classification model; inputs the training data set into the preset classification model and retrains it until it converges, obtaining a new classification model; and uses the model parameters obtained when the preset classification model was pre-trained as the initial parameters for retraining, rather than random initialization.
The communication bus 1140 mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 1140 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 4, but this does not indicate only one bus or one type of bus.
The communication interface 1120 is used for communication between the electronic device and other devices.
The memory 1130 may include a Random Access Memory (RAM), and may also include a non-volatile memory (non-volatile memory), such as at least one disk memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor 1110 may be a general-purpose processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present invention, a computer-readable storage medium is further provided, on which a program of a model training method is stored, which when executed by a processor implements the steps of the model training method described in the aforementioned method embodiment.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of model training, comprising:
acquiring a training data set for retraining a preset classification model;
inputting the training data set into the preset classification model, and retraining the preset classification model until it converges, to obtain a new classification model;
wherein the initial parameters of the preset classification model during retraining are the model parameters obtained when the preset classification model was pre-trained.
2. The model training method of claim 1, wherein the obtaining of the training data set for retraining the pre-set classification model comprises:
acquiring a pre-training data set for pre-training the preset classification model;
and extracting training data from the pre-training data set to construct the training data set.
3. The model training method of claim 2, wherein the extracting training data in the pre-training data set to obtain the training data set comprises:
acquiring surface electromyographic signals of a preset number of channels corresponding to each gesture action in at least two different gesture actions;
moving the acquired surface electromyographic signals of the preset number of channels corresponding to each gesture action, in the at least two different gesture actions, to the left or right by half a channel position to obtain the training data;
and/or moving the acquisition positions of the surface electromyographic signals of the preset number of channels corresponding to each gesture action, in the at least two different gesture actions, upwards or downwards by half a channel position to obtain the training data.
4. The model training method of claim 1, wherein pre-training the pre-set classification model comprises:
acquiring a pre-training data set and a test data set;
initializing pre-training parameters of an original classification model randomly;
using the pre-training parameters as the initial parameters when pre-training the original classification model;
inputting the pre-training data set into the original classification model, and pre-training the original classification model;
when the pre-training of the original classification model is completed, verifying the original classification model by using the test data set;
and if the original classification model is converged, obtaining the preset classification model.
5. The model training method of claim 4, further comprising:
acquiring surface electromyographic signals of a preset number of channels corresponding to each gesture action in at least two different gesture actions;
preprocessing the surface electromyographic signals corresponding to the at least two gesture actions to obtain electromyographic data corresponding to the at least two gesture actions;
processing the preprocessed myoelectric data corresponding to at least two gesture actions by using a sliding window algorithm to obtain training sample data corresponding to each gesture action;
and grouping the training sample data corresponding to each gesture action to obtain a pre-training data set and a testing data set.
6. The model training method of claim 5, wherein the preset classification model comprises: a cascaded input layer, a convolution pooling layer, a full connection layer, a softmax function layer, and an output layer;
the convolution pooling layer comprises at least two convolution layers, the output of each convolution layer is connected with the input of one normalization layer, the output of each normalization layer is connected with the input of one ReLU layer, and the output of each ReLU layer is connected with the inputs of two cascaded maximum pooling layers.
7. The model training method of claim 6, wherein the convolution pooling layer comprises 5 concatenated convolutional layers, wherein the number of convolutional kernels in the first convolutional layer is 16, the number of convolutional kernels in the second convolutional layer is 32, the number of convolutional kernels in the third convolutional layer is 64, the number of convolutional kernels in the fourth convolutional layer is 64, the number of convolutional kernels in the fifth convolutional layer is 16, and the size of each convolutional kernel is 3 x 3.
8. A model training apparatus, comprising:
the acquisition module is used for acquiring a training data set for retraining the preset classification model;
the training module is used for inputting the training data set into the preset classification model and retraining the preset classification model until it converges, to obtain a new classification model;
wherein the initial parameters of the preset classification model during retraining are the model parameters obtained when the preset classification model was pre-trained.
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with each other through the communication bus;
a memory for storing a computer program;
a processor for implementing the model training method according to any one of claims 1 to 7 when executing a program stored in a memory.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium has stored thereon a program of a model training method, which when executed by a processor implements the steps of the model training method of any one of claims 1 to 7.
CN202010931325.7A 2020-09-07 2020-09-07 Model training method and device, electronic equipment and storage medium Pending CN112132192A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010931325.7A CN112132192A (en) 2020-09-07 2020-09-07 Model training method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010931325.7A CN112132192A (en) 2020-09-07 2020-09-07 Model training method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112132192A true CN112132192A (en) 2020-12-25

Family

ID=73848138

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010931325.7A Pending CN112132192A (en) 2020-09-07 2020-09-07 Model training method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112132192A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209675A (en) * 2020-01-10 2020-05-29 南方电网科学研究院有限责任公司 Simulation method and device of power electronic device, terminal equipment and storage medium
CN112826516A (en) * 2021-01-07 2021-05-25 京东数科海益信息科技有限公司 Electromyographic signal processing method, device, equipment, readable storage medium and product

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426842A (en) * 2015-11-19 2016-03-23 浙江大学 Support vector machine based surface electromyogram signal multi-hand action identification method
CN107590432A (en) * 2017-07-27 2018-01-16 北京联合大学 A kind of gesture identification method based on circulating three-dimensional convolutional neural networks
WO2019080203A1 (en) * 2017-10-25 2019-05-02 南京阿凡达机器人科技有限公司 Gesture recognition method and system for robot, and robot
CN109765996A (en) * 2018-11-23 2019-05-17 华东师范大学 Insensitive gesture detection system and method are deviated to wearing position based on FMG armband
US20190227627A1 (en) * 2018-01-25 2019-07-25 Ctrl-Labs Corporation Calibration techniques for handstate representation modeling using neuromuscular signals
CN110710970A (en) * 2019-09-17 2020-01-21 北京海益同展信息科技有限公司 Method and device for recognizing limb actions, computer equipment and storage medium
CN111209885A (en) * 2020-01-13 2020-05-29 腾讯科技(深圳)有限公司 Gesture information processing method and device, electronic equipment and storage medium
CN111367399A (en) * 2018-12-26 2020-07-03 中国科学院沈阳自动化研究所 Surface electromyographic signal gesture recognition method
CN111374808A (en) * 2020-03-05 2020-07-07 北京海益同展信息科技有限公司 Artificial limb control method and device, storage medium and electronic equipment
CN111401166A (en) * 2020-03-06 2020-07-10 中国科学技术大学 Robust gesture recognition method based on electromyographic information decoding

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105426842A (en) * 2015-11-19 2016-03-23 浙江大学 Support vector machine based surface electromyogram signal multi-hand action identification method
CN107590432A (en) * 2017-07-27 2018-01-16 北京联合大学 A kind of gesture identification method based on circulating three-dimensional convolutional neural networks
WO2019080203A1 (en) * 2017-10-25 2019-05-02 南京阿凡达机器人科技有限公司 Gesture recognition method and system for robot, and robot
US20190227627A1 (en) * 2018-01-25 2019-07-25 Ctrl-Labs Corporation Calibration techniques for handstate representation modeling using neuromuscular signals
CN109765996A (en) * 2018-11-23 2019-05-17 华东师范大学 Insensitive gesture detection system and method are deviated to wearing position based on FMG armband
CN111367399A (en) * 2018-12-26 2020-07-03 中国科学院沈阳自动化研究所 Surface electromyographic signal gesture recognition method
CN110710970A (en) * 2019-09-17 2020-01-21 北京海益同展信息科技有限公司 Method and device for recognizing limb actions, computer equipment and storage medium
CN111209885A (en) * 2020-01-13 2020-05-29 腾讯科技(深圳)有限公司 Gesture information processing method and device, electronic equipment and storage medium
CN111374808A (en) * 2020-03-05 2020-07-07 北京海益同展信息科技有限公司 Artificial limb control method and device, storage medium and electronic equipment
CN111401166A (en) * 2020-03-06 2020-07-10 中国科学技术大学 Robust gesture recognition method based on electromyographic information decoding

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111209675A (en) * 2020-01-10 2020-05-29 南方电网科学研究院有限责任公司 Simulation method and device of power electronic device, terminal equipment and storage medium
CN112826516A (en) * 2021-01-07 2021-05-25 京东数科海益信息科技有限公司 Electromyographic signal processing method, device, equipment, readable storage medium and product

Similar Documents

Publication Publication Date Title
CN106920545B (en) Speech feature extraction method and device based on artificial intelligence
WO2021143353A1 (en) Gesture information processing method and apparatus, electronic device, and storage medium
CN108764195B (en) Handwriting model training method, handwritten character recognition method, device, equipment and medium
CN111163690B (en) Arrhythmia detection method, arrhythmia detection device, electronic equipment and computer storage medium
CN112132192A (en) Model training method and device, electronic equipment and storage medium
CN109276255A (en) A kind of limb tremor detection method and device
WO2022012364A1 (en) Electromyographic signal processing method and apparatus, and exoskeleton robot control method and apparatus
CN110333783B (en) Irrelevant gesture processing method and system for robust electromyography control
CN111700718B (en) Method and device for recognizing holding gesture, artificial limb and readable storage medium
CN111603162A (en) Electromyographic signal processing method and device, intelligent wearable device and storage medium
Betthauser et al. Stable electromyographic sequence prediction during movement transitions using temporal convolutional networks
CN111383763B (en) Knee joint movement information processing method, device, equipment and storage medium
Suri et al. Transfer learning for semg-based hand gesture classification using deep learning in a master-slave architecture
CN114384999B (en) User-independent myoelectric gesture recognition system based on self-adaptive learning
Aceves-Fernández et al. Methodology proposal of EMG hand movement classification based on cross recurrence plots
Moin et al. Analysis of contraction effort level in EMG-based gesture recognition using hyperdimensional computing
CN106845348B (en) Gesture recognition method based on arm surface electromyographic signals
CN115981470A (en) Gesture recognition method and system based on feature joint coding
Xu et al. A novel concatenate feature fusion RCNN architecture for sEMG-based hand gesture recognition
Jo et al. Real-time hand gesture classification using crnn with scale average wavelet transform
CN112307996B (en) Fingertip electrocardio identity recognition device and method
Ison et al. Beyond user-specificity for emg decoding using multiresolution muscle synergy analysis
CN109886402A (en) Deep learning model training method, device, computer equipment and storage medium
CN109359622B (en) Electromyographic action recognition online updating algorithm based on Gaussian mixture model
Arozi et al. Electromyography (EMG) signal recognition using combined discrete wavelet transform based on artificial neural network (ANN)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Beijing Economic and Technological Development Zone, Beijing 100176

Applicant before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.