CN111374808A - Artificial limb control method and device, storage medium and electronic equipment - Google Patents


Info

Publication number
CN111374808A
CN111374808A (application CN202010148338.7A)
Authority
CN
China
Prior art keywords
data
training
finger
prediction model
channel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010148338.7A
Other languages
Chinese (zh)
Inventor
田彦秀
韩久琦
姚秀军
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Haiyi Tongzhan Information Technology Co Ltd
Original Assignee
Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Haiyi Tongzhan Information Technology Co Ltd
Priority to CN202010148338.7A
Publication of CN111374808A
Legal status: Pending

Classifications

    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F - FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F 2/00 - Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F 2/50 - Prostheses not implantable in the body
    • A61F 2/68 - Operating or control means
    • A61F 2/70 - Operating or control means electrical
    • A61F 2/72 - Bioelectric control, e.g. myoelectric
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B - DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 - Measuring for diagnostic purposes; Identification of persons
    • A61B 5/72 - Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B 5/7235 - Details of waveform analysis
    • A61B 5/7264 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B 5/7267 - Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/04 - Architecture, e.g. interconnection topology
    • G06N 3/045 - Combinations of networks
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 - Computing arrangements based on biological models
    • G06N 3/02 - Neural networks
    • G06N 3/08 - Learning methods
    • A - HUMAN NECESSITIES
    • A61 - MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F - FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F 2/00 - Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F 2/50 - Prostheses not implantable in the body
    • A61F 2/68 - Operating or control means
    • A61F 2/70 - Operating or control means electrical
    • A61F 2002/704 - Operating or control means electrical computer-controlled, e.g. robotic control


Abstract

The disclosure provides an artificial limb control method, an artificial limb control device, a storage medium, and an electronic device, and relates to the field of computer technology. The artificial limb control method includes the following steps: acquiring, through a plurality of channels, electromyographic data of a site associated with an abnormal part while the abnormal part simulates an action, and determining a plurality of features from the electromyographic data; determining a target channel from the plurality of channels and a target feature from the plurality of features; training and decoding the electromyographic data based on the target channel and the target features to obtain a prediction result for each finger during the simulated action and the parameters of the prediction model corresponding to each finger; and, if the parameters pass an online test, controlling the artificial limb of the abnormal part according to the prediction results. The technical solution of the disclosure can improve the flexibility and accuracy of artificial limb control.

Description

Artificial limb control method and device, storage medium and electronic equipment
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a prosthetic limb control method, a prosthetic limb control device, a computer-readable storage medium, and an electronic device.
Background
With the development of technology, users with limb abnormalities (such as amputees) can use the electromyographic signals of the residual limb to control a prosthetic hand.
In the related art, there are two approaches to prosthesis control. In the first, the opening and closing of the myoelectric prosthetic hand are controlled directly by the residual extensor muscles, or by the myoelectric signals generated when the muscles contract in a particular pattern. In the second, different muscle contraction patterns correspond to different limb actions, and the electromyographic signals are classified by pattern to drive the control.
In both approaches, single-degree-of-freedom control offers very limited functionality, and flexibility, dexterity, and usability are poor. Multi-degree-of-freedom control can only select a few highly distinguishable fixed actions from a gesture library, so the number of available gestures is limited, the motion of the prosthesis cannot be fully controlled, and control accuracy is poor.
It is to be noted that the information disclosed in the above background section is only for enhancement of understanding of the background of the present disclosure, and thus may include information that does not constitute prior art known to those of ordinary skill in the art.
Disclosure of Invention
The present disclosure provides a prosthesis control method and apparatus, a computer-readable storage medium, and an electronic device, so as to overcome the problems of low flexibility and poor accuracy in the related art at least to some extent.
Additional features and advantages of the disclosure will be set forth in the detailed description which follows, or in part will be obvious from the description, or may be learned by practice of the disclosure.
According to an aspect of the present disclosure, there is provided a prosthesis control method including: acquiring, through a plurality of channels, electromyographic data of a site associated with an abnormal part while the abnormal part simulates an action, and determining a plurality of features from the electromyographic data; determining a target channel from the plurality of channels and a target feature from the plurality of features; training and decoding the electromyographic data based on the target channel and the target features to obtain a prediction result for each finger during the simulated action and the parameters of a prediction model corresponding to each finger; and, if the parameters pass an online test, controlling the prosthesis of the abnormal part according to the prediction results.
In an exemplary embodiment of the present disclosure, the determining a target channel from the plurality of channels includes: training a multilayer perceptron for each channel and taking the channel with the highest performance as the first optimal channel; then pairing the first optimal channel with each of the remaining channels, training a multilayer perceptron for each pair, and selecting the pair with the highest performance as the optimal two-channel subset, continuing to add one channel at a time until a preset channel condition is met.
In an exemplary embodiment of the present disclosure, the determining a target feature from the plurality of features includes: training a multilayer perceptron for each feature and taking the feature with the highest performance as the first optimal feature; then pairing the first optimal feature with each of the remaining features, training a multilayer perceptron for each pair, and selecting the pair with the highest performance as the optimal two-feature subset, continuing to add one feature at a time until a preset feature condition is met.
In an exemplary embodiment of the present disclosure, the training and decoding the electromyography data based on the target channel and the target feature to obtain a prediction result of each finger in a simulated motion includes: screening the electromyographic data based on the target channel and the target feature; inputting the screened electromyographic data into a multilayer perceptron to train to obtain the prediction model, and testing the prediction model to obtain the prediction result of each finger.
In an exemplary embodiment of the disclosure, the inputting the screened electromyographic data into a multilayer perceptron for training to obtain the prediction model includes: acquiring myoelectric data, and determining training data from the myoelectric data; and training the multilayer perceptron according to the training data, and adjusting parameters of the multilayer perceptron based on a comparison result of a prediction result and a labeling result of the training data to obtain the trained multilayer perceptron as the prediction model.
In an exemplary embodiment of the disclosure, after obtaining the prediction model, the method further comprises: and determining test data from the electromyographic data, and testing the prediction model based on the test data so as to process the prediction model according to a test result.
In an exemplary embodiment of the present disclosure, if the parameter test passes, controlling the prosthesis of the abnormal portion according to the prediction result includes: according to the prediction result of each finger, performing an action test on the artificial limb to obtain an action test result so as to determine whether the parameter online test passes or not according to the action test result; and if the parameter online test is passed, controlling the artificial limb according to the parameter of the prediction model corresponding to the prediction result.
According to one aspect of the present disclosure, there is provided a prosthesis control device comprising: the data acquisition module is used for acquiring myoelectric data of a relevant part corresponding to an abnormal part when abnormal part simulation action is adopted through a plurality of channels and determining a plurality of characteristics according to the myoelectric data; a screening module for determining a target channel from the plurality of channels and a target feature from the plurality of features; the prediction result determining module is used for training and decoding the electromyographic data based on the target channel and the target characteristics, obtaining a prediction result of each finger in motion simulation, and obtaining parameters of a prediction model corresponding to each finger; and the control module is used for controlling the artificial limb of the abnormal part according to the prediction result if the parameter online test passes.
According to an aspect of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements a prosthesis control method as in any one of the above.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; and a memory for storing executable instructions of the processor; wherein the processor is configured to perform any of the prosthesis control methods described above via execution of the executable instructions.
According to the prosthesis control method and device, the computer-readable storage medium, and the electronic device provided in the embodiments of the present disclosure, electromyographic data of a user can be collected through a plurality of channels while the user simulates an action with the abnormal part; a prediction result for each finger during the simulated action and the parameters of the prediction model corresponding to each finger are obtained by processing the electromyographic data; and, when the parameters pass the online test, the prosthesis of the abnormal part is controlled according to the prediction result for each finger. On one hand, because the electromyographic data can be trained and decoded, multi-degree-of-freedom control of the prosthesis is realized according to the prediction result for each finger, which improves the flexibility and functional diversity of prosthesis control as well as its usability. On the other hand, because a prediction result can be determined for each finger, the prosthesis can perform a variety of actions by combining individual fingers; this avoids the limitation of performing only fixed actions, increases the number of actions that can be performed, allows the motion of the prosthesis to be fully controlled, and improves the accuracy of prosthesis control.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and together with the description, serve to explain the principles of the disclosure. It is apparent that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings can be obtained from those drawings without inventive effort for a person skilled in the art.
Fig. 1 schematically illustrates a system architecture diagram for implementing a prosthesis control method according to an embodiment of the present disclosure.
Fig. 2 schematically illustrates a prosthesis control method in an embodiment of the present disclosure.
Fig. 3 schematically shows a specific flow chart of the control of the prosthesis according to the embodiment of the disclosure.
Fig. 4 schematically illustrates a block diagram of a prosthetic control device in an embodiment of the present disclosure.
Fig. 5 schematically illustrates a block diagram of an electronic device in an embodiment of the disclosure.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. Example embodiments may, however, be embodied in many different forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of example embodiments to those skilled in the art. The described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the disclosure. One skilled in the relevant art will recognize, however, that the subject matter of the present disclosure can be practiced without one or more of the specific details, or with other methods, components, devices, steps, and the like. In other instances, well-known technical solutions have not been shown or described in detail to avoid obscuring aspects of the present disclosure.
Furthermore, the drawings are merely schematic illustrations of the present disclosure and are not necessarily drawn to scale. The same reference numerals in the drawings denote the same or similar parts, and thus their repetitive description will be omitted. Some of the block diagrams shown in the figures are functional entities and do not necessarily correspond to physically or logically separate entities. These functional entities may be implemented in the form of software, or in one or more hardware modules or integrated circuits, or in different networks and/or processor devices and/or microcontroller devices.
Fig. 1 shows a schematic diagram of an exemplary system architecture to which the technical solutions of the embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include a first end 101, a network 102, and a second end 103. The first end 101 may be a prosthesis, such as a prosthetic hand. The network 102 is used as a medium for providing a communication link between the first end 101 and the second end 103, the network 102 may include various connection types, such as a wired communication link, a wireless communication link, and the like, and in the embodiment of the present disclosure, the network 102 between the first end 101 and the second end 103 may be a wired communication link, such as a communication link provided by a serial connection line, or a wireless communication link, such as a communication link provided by a wireless network. The second terminal 103 may be a server or a client with a data processing function, for example, a terminal device with a data processing function, such as a portable computer, a desktop computer, a smart phone, etc., for processing the electromyographic data.
It should be understood that the number of first ends, networks and second ends in fig. 1 is merely illustrative.
It should be noted that the prosthesis control method provided by the embodiments of the present disclosure may be performed entirely by the second end, and accordingly, the prosthesis control device may be disposed in the second end 103.
Based on the system architecture, the embodiment of the disclosure provides a prosthesis control method. Referring to fig. 2, the prosthesis control method includes steps S210 to S240, which are described in detail as follows:
in step S210, electromyographic data of a site associated with an abnormal part is acquired through a plurality of channels while the abnormal part simulates an action, and a plurality of features are determined from the electromyographic data;
in step S220, a target channel is determined from the plurality of channels, and a target feature is determined from the plurality of features;
in step S230, the electromyographic data is trained and decoded based on the target channel and the target feature, a prediction result for each finger during the simulated action is obtained, and the parameters of the prediction model corresponding to each finger are obtained;
in step S240, if the parameters pass the online test, the prosthesis of the abnormal part is controlled according to the prediction result.
In the prosthesis control method provided by the embodiment of the disclosure, on one hand, since the myoelectric data can be trained and decoded, when the parameter online test of the prediction model passes, the control of multiple degrees of freedom of the prosthesis at the abnormal part is realized according to the prediction result of each finger, so that the flexibility and the control function of the prosthesis control are increased, and the usability is improved. On the other hand, the prediction result of each finger can be determined, so that the artificial limb can perform various actions by combining each finger, the limitation that only fixed actions can be performed is avoided, the number of the actions which can be performed is increased, the movement of the artificial limb can be completely controlled, and the accuracy of controlling the artificial limb is improved.
Next, the prosthesis control method in the embodiment of the present disclosure will be further explained with reference to the drawings.
Referring to fig. 2, in step S210, electromyogram data of a relevant portion corresponding to an abnormal portion when an abnormal portion simulation motion is employed is acquired through a plurality of channels, and a plurality of features are determined according to the electromyogram data.
In the embodiment of the present disclosure, the hand may first be divided into a plurality of degrees of freedom, specifically 6 degrees of freedom: the little finger, ring finger, middle finger, and index finger each have 1 degree of freedom (extension or flexion), and the thumb has 2 degrees of freedom (extension or flexion, and abduction or internal rotation). In the embodiment of the disclosure, electromyographic data can be acquired so that the prosthetic hand can be controlled according to the electromyographic data. Before acquiring the electromyographic data, electrodes may first be placed so that the data can be acquired through the electrodes. Specifically, because the cause of the abnormality, the remaining muscles, and the position of the abnormal part differ from user to user, the electrode positions at which muscle tension can be controlled are determined by palpating each user's residual limb; in this way it can be determined that electrodes need to be placed over the extensor indicis, flexor carpi radialis, palmaris longus, flexor digitorum superficialis, and flexor carpi ulnaris. The number of electrodes placed may be 5 to 7, allowing the user full freedom of control with as few channels as possible. Here, a channel refers to a channel for electromyographic signal acquisition. The number of channels may be determined by the number of electrodes, and the number of electrodes may differ between devices.
The abnormal part may be an amputation site, for example an amputated hand. The user may simulate an action with the abnormal part, where the action is performed by following a mirrored action. A mirrored action is an example action played back by software on the screen of the terminal device, which the user imitates, completing the movement under its guidance. It is noted that the prosthetic hand is not installed or worn while the user imitates this example action.
In the embodiment of the present disclosure, in order to implement comprehensiveness of the actions, the mirror action and the action simulated by using the abnormal portion may include, but are not limited to: thumb flexion, index finger flexion, middle finger flexion, ring finger flexion, little finger flexion, palm extension, fist grasping, three-finger pinching, and thumb pronation. Since there may be muscular atrophy at the abnormal portion and electromyographic signals may not be collected, it is necessary to perform a simulation operation according to a mirror image operation for data collection.
During the simulated actions, the collected electromyographic data can be recorded synchronously. In the embodiment of the disclosure, to improve the accuracy of the electromyographic data, the data may be collected separately for a prediction-model training stage and a prediction-model online testing stage. The prediction model may be any machine learning model that can be used to predict finger bending; for example, it may be a multilayer perceptron or a convolutional neural network, and the multilayer perceptron is used as the example prediction model in the embodiments of the present disclosure. While the user performs an example action simulated with the amputation site, electromyographic data of the site associated with the amputation site may be collected from a plurality of channels through the electrodes placed at the positions described above. The associated site may be the residual-limb portion of the abnormal part, such as the muscles of the surrounding remaining tissue. Specifically, in the prediction-model training stage, the sampling rate of the electromyographic data is 2000 Hz, each action is repeated 3 times, and each repetition is held for 5 seconds. In the offline training of the multilayer perceptron, the following features are computed over a 100 ms window with a 50 ms step; the features may include, but are not limited to: mean absolute value (MAV), number of zero crossings (ZC), number of slope sign changes (SSC), waveform length (WL), log detector, root mean square (RMS), and Willison amplitude. When the multilayer perceptron is tested online, the electromyographic data is down-sampled to 30 Hz, and the other parameters can remain unchanged. Reducing the sampling rate of the electromyographic data in the testing stage reduces system consumption, saves resources, lowers the computational load, and improves data-processing efficiency.
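As a minimal sketch (not taken from the patent text) of the sliding-window feature computation described above, the following assumes `emg` is a (samples, channels) NumPy array; the window and step follow the stated 100 ms / 50 ms values, while the amplitude threshold used for the ZC, SSC, and Willison-amplitude counts is an assumption.

```python
import numpy as np

def window_features(emg, fs=2000, win_ms=100, step_ms=50, thr=0.01):
    """emg: (samples, channels) array; returns (n_windows, n_channels * 7) features."""
    win = int(fs * win_ms / 1000)
    step = int(fs * step_ms / 1000)
    feats = []
    for start in range(0, emg.shape[0] - win + 1, step):
        w = emg[start:start + win]                         # one window, (win, channels)
        d = np.diff(w, axis=0)
        mav = np.mean(np.abs(w), axis=0)                   # mean absolute value
        rms = np.sqrt(np.mean(w ** 2, axis=0))             # root mean square
        wl = np.sum(np.abs(d), axis=0)                     # waveform length
        zc = np.sum((w[:-1] * w[1:] < 0) & (np.abs(d) > thr), axis=0)      # zero crossings
        ssc = np.sum((d[:-1] * d[1:] < 0) &
                     ((np.abs(d[:-1]) > thr) | (np.abs(d[1:]) > thr)), axis=0)  # slope sign changes
        logdet = np.exp(np.mean(np.log(np.abs(w) + 1e-8), axis=0))         # log detector
        wamp = np.sum(np.abs(d) > thr, axis=0)              # Willison amplitude
        feats.append(np.concatenate([mav, rms, wl, zc, ssc, logdet, wamp]))
    return np.asarray(feats)
```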
In step S220, a target channel is determined from the plurality of channels, and a target feature is determined from the plurality of features.
In the embodiment of the present disclosure, in order to improve accuracy of a prediction result of electromyographic data, the electromyographic data may be first screened before being processed by a prediction model. In particular, channels and features may be screened separately. When the channels are screened, the screening can be performed according to the training and testing results of all the channels on the prediction model. Specifically, the process of acquiring the target channel may include the following steps: training a prediction model for each channel, and taking the channel with the highest performance as a first optimal channel; and pairing the first optimal channel with each of the rest channels, training a prediction model, and selecting a pair of channels with the highest performance as an optimal subset of the two channels until a preset channel condition is met after one channel is added.
The prediction model can be a multilayer perceptron (MLP), also called an artificial neural network, which has one or more hidden layers between an input layer and an output layer; the simplest multilayer perceptron has only one hidden layer, i.e. a three-layer structure. The layers of the multilayer perceptron are fully connected: the bottom layer is the input layer, the middle layers are hidden layers, and the last layer is the output layer. For each channel, a multilayer perceptron may be trained in the same manner. Specifically, the training data corresponding to each channel may be input into the multilayer perceptron for training to obtain a prediction result for the training data; if the prediction result is consistent with the labeled result of the training data, the model is taken as the trained multilayer perceptron; if not, the weight parameters of the multilayer perceptron are adjusted until the two are consistent, yielding the trained multilayer perceptron. After training, the trained multilayer perceptron corresponding to each channel can be tested with the test data to determine its accuracy, and performance is determined from the accuracy: the higher the accuracy, the higher the performance. The training data may be 70% of the collected electromyographic data, and the test data may be the remaining 30%. Further, the trained multilayer perceptrons may be ranked by accuracy, and the channel corresponding to the most accurate one is taken as the first optimal channel. Next, the first optimal channel may be paired with each of the remaining channels, a multilayer perceptron trained for each pair, and the pair with the highest performance (highest accuracy) selected as the optimal two-channel subset; channels continue to be added one at a time until a preset channel condition is met. The preset channel condition may be that the increase in the coefficient of determination is less than 0.01, or that a preset number of channels is reached. The coefficient of determination is a numerical characteristic of the relationship between one random variable and several other random variables; it is a statistical index reflecting how reliably a regression model describes the variation of the dependent variable, and can be defined as the ratio of the variation of the dependent variable explained by all the independent variables in the model to the total variation of the dependent variable. If a channel subset is detected to satisfy the preset channel condition, the channels satisfying the condition can be taken as the target channels. Thus, the target channels may be some or all of the channels, as determined by the training and testing results. By training and testing the multilayer perceptron over the channels, the most informative channels can be screened out, achieving accurate screening of the data and reducing the computational load.
In addition, the features can be screened, and particularly, the screening can be performed according to the training and testing results of all the features on the multilayer perceptron. Specifically, a prediction model is trained for each feature, and the feature with the highest performance is used as a first optimal feature; and matching the first optimal feature with each of the rest features, training a prediction model, and selecting a pair of features with the highest performance as an optimal subset of the two features until a preset feature condition is met after one feature is added.
For each feature, a multilayer perceptron may be trained in the same manner. Specifically, the training data corresponding to each feature may be input into the multilayer perceptron for training to obtain a prediction result for the training data, and the trained multilayer perceptron is determined from the prediction result and the labeled result of the training data. After training, the trained multilayer perceptron corresponding to each feature can be tested with the test data to determine its accuracy, and performance is determined from the accuracy. Further, the trained multilayer perceptrons can be ranked by accuracy, and the feature corresponding to the most accurate one is taken as the first optimal feature. Next, the first optimal feature may be paired with each of the remaining features, a multilayer perceptron trained for each pair, and the pair with the highest accuracy selected as the optimal two-feature subset; features continue to be added one at a time until a preset feature condition is met. The preset feature condition may be that the increase in the coefficient of determination is less than 0.01, or that a preset number of features is reached. If a feature subset is detected to satisfy the preset feature condition, the features satisfying the condition can be taken as the target features. Thus, the target features may be some or all of the features, determined by the training and testing results. By training and testing the multilayer perceptron over the features, the most informative features can be screened out, achieving accurate screening of the data and reducing the computational load. A sketch of this forward-selection procedure, which applies equally to the channel screening above and the feature screening here, is given below.
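The following is a hypothetical sketch of the greedy forward selection: each candidate subset is scored by training a small MLP and measuring the coefficient of determination (R^2) on held-out data, and selection stops when the gain falls below 0.01 or a subset-size cap is reached. The 70/30 split and the 0.01 threshold come from the description above; the model settings and the grouping of columns are assumptions.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

def score_subset(X, y, cols):
    """Train an MLP on the chosen columns and return R^2 on a 30% held-out split."""
    X_tr, X_te, y_tr, y_te = train_test_split(X[:, cols], y, test_size=0.3, random_state=0)
    model = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                         max_iter=2000, random_state=0)
    model.fit(X_tr, y_tr)
    return r2_score(y_te, model.predict(X_te))

def forward_select(X, y, groups, max_groups, min_gain=0.01):
    """groups: dict mapping each candidate (a channel or a feature) to a list of column indices."""
    selected, best = [], -np.inf
    while len(selected) < max_groups:
        trials = {name: score_subset(X, y, sum([groups[n] for n in selected + [name]], []))
                  for name in groups if name not in selected}
        if not trials:
            break
        name, score = max(trials.items(), key=lambda kv: kv[1])
        if selected and score - best < min_gain:
            break                      # gain in R^2 below 0.01: stop adding candidates
        selected.append(name)
        best = score
    return selected, best
```

The same routine would be called once with the channels as candidate groups and once with the seven time-domain features.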
Continuing to refer to fig. 2, in step S230, the electromyographic data is trained and decoded based on the target channel and the target feature, a prediction result of each finger during the simulated motion is obtained, and a parameter of a prediction model corresponding to each finger is obtained.
In the embodiment of the disclosure, after the target channel and the target feature are determined, the electromyographic data can be screened according to them, so that not all channels and features are fed into the multilayer perceptron. Specifically, the screened electromyographic data can be input into the trained multilayer perceptron, so that the prediction result for each finger during the simulated action can be obtained through the trained multilayer perceptron.
In order to ensure the accuracy of the prediction result determined by the multilayer perceptron, the multilayer perceptron can be trained firstly, and the trained multilayer perceptron is obtained as a prediction model, so that the accuracy and the stability of the prediction model are improved. The training process of the multi-layer perceptron may include the steps of: acquiring myoelectric data, and determining training data from the myoelectric data; and training the multilayer perceptron according to the training data, and adjusting parameters of the multilayer perceptron based on a comparison result of a prediction result and a labeling result of the training data to obtain the trained multilayer perceptron as a prediction model. The training data may be 70% of electromyography data, where the training data refers to a large amount of electromyography data for which a prediction result of a finger has been obtained, and may be a plurality of electromyography data of the same user. The prediction result of the training data refers to the degree of bending of each finger obtained by inputting the training data into the trained multi-layer perceptron. The labeling result of the training data refers to the actual bending degree of each finger, and can be determined by the bending degree of each finger in the example action. The comparison result refers to whether the prediction result is consistent with the labeling result or not and the difference between the prediction result and the labeling result.
Specifically, the multilayer perceptron in the disclosed embodiments may include a three-layer network: an input layer, a hidden layer containing 3 neurons, and an output layer acting as the decoder of finger movement. The training data in the electromyographic data, after channel and feature selection, is fed to the input layer of the multilayer perceptron, whose number of nodes depends on the number of channels of the input electromyographic data. The 3 neurons in the hidden layer all use the hyperbolic tangent activation function; the output layer is the decoded output and contains only one output parameter, which represents the prediction result for one finger during the simulated action. The prediction result can be used to represent the degree of bending of each finger, and specifically can be expressed as one of two classes, extension or flexion.
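As an illustration of this decoder structure (the shapes and the weight packing are assumptions, not taken from the patent), the forward pass for a single finger can be written out directly: an input layer sized by the selected channels and features, 3 hyperbolic-tangent hidden neurons, and one output.

```python
import numpy as np

def mlp_forward(x, w):
    """x: (n_inputs,) feature vector for one window;
    w: flat weight vector packed as [W1 (3 x n_inputs), b1 (3,), W2 (3,), b2 (1,)]."""
    n_in = x.shape[0]
    W1 = w[:3 * n_in].reshape(3, n_in)
    b1 = w[3 * n_in:3 * n_in + 3]
    W2 = w[3 * n_in + 3:3 * n_in + 6]
    b2 = w[3 * n_in + 6]
    h = np.tanh(W1 @ x + b1)      # hidden layer: 3 tanh neurons
    return float(W2 @ h + b2)     # output layer: one decoded value for this finger
```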
In the multilayer perceptron, every node of one layer is connected to every node of the next layer by a weight. During training, starting from the initial weights, if in an iteration the prediction result is inconsistent with the labeled result or differs from it greatly, the weights are adjusted, and this continues until the prediction result is consistent with the labeled result. That is, training of the entire multilayer perceptron can be done by minimizing the sum-of-squares error function, which can be expressed as equation (1):

E(w) = \frac{1}{2} \sum_{n=1}^{N} \left\| y(x_n, w) - t_n \right\|^2    (1)

where x_n, n = 1, …, N, is the electromyographic data corresponding to the input features, y(x_n, w) is the prediction output by the multilayer perceptron, w is the weight vector of the neurons, and t_n is the actual labeled result.
It should be noted that, because the Levenberg-Marquardt method converges faster and is computationally more efficient than a typical gradient descent method, it may be chosen in the embodiment of the present disclosure to fit the network weights. The Levenberg-Marquardt method is a least-squares estimation method for regression parameters in nonlinear regression, combining the steepest descent method and the linearization (Gauss-Newton) method: steepest descent is suitable for the early stage of iteration, when the parameter estimates are far from the optimum, while the Gauss-Newton method is suitable for the later stage, when the estimates are close to the optimum. Combining the two reaches the optimum faster. In addition, using other methods to determine the network weights is also within the scope of the embodiments of the present disclosure.
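A minimal sketch of such a Levenberg-Marquardt fit is shown below, using SciPy's LM solver on the residuals of equation (1); the weight packing repeats the structural sketch above, and the data shown are synthetic placeholders rather than real electromyographic features.

```python
import numpy as np
from scipy.optimize import least_squares

def mlp_forward(x, w, n_hidden=3):
    # same packing as in the structural sketch above: [W1, b1, W2, b2]
    n_in = x.shape[0]
    W1 = w[:n_hidden * n_in].reshape(n_hidden, n_in)
    b1 = w[n_hidden * n_in:n_hidden * (n_in + 1)]
    W2 = w[n_hidden * (n_in + 1):n_hidden * (n_in + 2)]
    b2 = w[n_hidden * (n_in + 2)]
    return float(W2 @ np.tanh(W1 @ x + b1) + b2)

def residuals(w, X, t):
    # residuals y(x_n, w) - t_n of equation (1); LM minimizes their sum of squares
    return np.array([mlp_forward(x, w) for x in X]) - t

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))                        # placeholder: 200 windows x 8 selected inputs
t = rng.uniform(0.0, 1.0, size=200)                  # placeholder: labeled bending degree, one finger
w0 = rng.normal(scale=0.1, size=3 * 8 + 3 + 3 + 1)   # initial weights
fit = least_squares(residuals, w0, args=(X, t), method="lm")
w_hat = fit.x                                        # fitted weights for this finger's decoder
```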
After the trained multilayer perceptron is obtained, the method may further include: determining test data from the electromyographic data, and testing the trained multilayer perceptron based on the test data so as to process it according to the test result. Specifically, the electromyographic data other than the training data may be used as the test data; for example, 30% of the electromyographic data may be used as test data for performance evaluation of the trained multilayer perceptron. In testing the multilayer perceptron, 10-fold cross-validation can be used to assess the accuracy of the algorithm. The specific process is as follows: the data set (the electromyographic data) is divided into 10 parts, and in turn 9 parts are used as training data and 1 part as test data for each trial. Each trial yields a corresponding accuracy (or error rate), and the average accuracy (or error rate) over the 10 results is used as the estimate of the algorithm's accuracy; generally, the 10-fold cross-validation is itself repeated several times (for example, 10 runs of 10-fold cross-validation) and the results averaged to estimate the accuracy. Testing the accuracy of the prediction model through repeated 10-fold cross-validation exercises the limited data in many trials from different splits, avoids local extrema, and yields the most reasonable error estimate, improving accuracy. If the accuracy of the trained multilayer perceptron does not meet the requirement (for example, it is below an accuracy threshold), its parameters can be updated again before the prediction model is used, so as to obtain a trained multilayer perceptron whose accuracy meets the requirement, improving the accuracy and reliability of the prediction model. If the accuracy does meet the requirement (for example, it is greater than or equal to the accuracy threshold), the parameters of the trained multilayer perceptron are kept unchanged and it is used as the final model. Of course, K-fold cross-validation may also be used, where K may be any suitable value.
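A sketch of the repeated 10-fold cross-validation described above follows, assuming `X` holds the selected features and `y` the labeled bending degree of one finger; the model settings and the R^2 scoring are illustrative choices, not taken from the patent.

```python
from sklearn.model_selection import RepeatedKFold, cross_val_score
from sklearn.neural_network import MLPRegressor

def estimate_accuracy(X, y, n_splits=10, n_repeats=10):
    """X: (n_windows, n_selected_features); y: (n_windows,) labeled bending degree for one finger."""
    model = MLPRegressor(hidden_layer_sizes=(3,), activation="tanh",
                         max_iter=2000, random_state=0)
    cv = RepeatedKFold(n_splits=n_splits, n_repeats=n_repeats, random_state=0)
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    return float(scores.mean())    # averaged over n_splits * n_repeats train/test splits
```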
Meanwhile, parameters of the prediction model corresponding to each finger in the motion simulation can be acquired. The prediction model herein may be a multi-layer perceptron, and thus the parameters of the prediction model may be weight parameters of the multi-layer perceptron. When the trained multilayer perceptron is used for acquiring the prediction result of each finger corresponding to the abnormal part when the user simulates the action through the abnormal part, the prediction result for expressing the extension or flexion type of each finger and the parameters of the prediction model corresponding to the prediction result can be configured or loaded on the artificial limb, and the action of the artificial limb at the abnormal part can be tested according to the parameters. Specifically, the predicted result of each finger can be determined by a plurality of users with abnormal parts (at the moment, artificial limbs are installed on the abnormal parts of the users) and normal users through the trained multilayer perceptron. Further, the prediction results for each finger and the parameters of the prediction model may be sent to the prosthesis to facilitate performing the action by the prosthesis. The corresponding artificial limb can be controlled to realize the action of each finger through the parameters of the trained multilayer perceptron, and various actions obtained by combining a plurality of fingers can also be realized. In particular, the predictive model parameters may be finalized based on the online test results. The online test result can be specifically determined by an action test, and the result of the parameter test can be represented by an action test result.
It should be noted that, for each user, the electromyographic data of the user itself may be used to train the prediction model corresponding to the user itself, and the prediction model corresponding to different users may be different. In the embodiment of the disclosure, the prediction model corresponding to each user is trained through the electromyogram data of each user to obtain the prediction result of each finger of each user, so that the requirement of individual differentiation among different users can be met, and the accuracy of the prediction model corresponding to each user and the accuracy of the prediction result of each finger are improved.
In step S240, if the parameter online test passes, the prosthesis of the abnormal portion is controlled according to the prediction result.
In the embodiment of the disclosure, the prosthesis may be a prosthetic hand, and may be specifically installed at an abnormal position of the same user. The online test refers to the actual use test of the parameters of the prediction model, and can be realized by an action test. The action test refers to the test of the finger flexibility through the action of a preset type. The preset type of action may be an action for indicating the sensitivity of the prosthesis, which may for example comprise a grabbing action. If the action test result of the preset type of action is passed, the parameter online test can be determined to be passed, and the artificial limb is continuously controlled according to the parameters of the trained multilayer perceptron corresponding to the prediction result. If the action test result of the preset type of action is failed, the parameter online test can be determined to be failed, and the prediction model training stage needs to be returned again to readjust and determine the parameters of the multilayer perceptron until the action test result of the preset type of action is passed. In the embodiment of the present disclosure, if the preset type of action is completed, the action test result of the preset type of action may be considered to be passed; if the preset type of action is not completed, the action test result of the preset type of action can be considered as failed.
For example, the preset type of action may be grasping an object, and the object may be a water bottle or the like. After channel and feature selection, the data is input into the trained multilayer perceptron, and the prediction result for each finger is sent to the prosthetic hand for an object-grasping test. The results show that whether a finger moved alone or together with other fingers, the multilayer perceptron successfully predicted the flexion and extension of each finger, and the prosthetic hand successfully grasped the water bottle and then put it down, verifying the feasibility of this approach.
If the action test result indicates that the parameter online test passes, the parameters of the multilayer perceptron corresponding to each finger in the action simulation can be used for controlling the artificial limb of the abnormal part. The parameters are tested on line, and then the prediction model parameters corresponding to the multilayer perceptron are adopted to control the artificial limb at the abnormal part when the parameters are tested on line, so that the control of the artificial limb can meet the training condition, the accuracy of artificial limb control is improved, each finger can be controlled from multiple degrees of freedom, and the flexibility of the artificial limb is improved.
If the action test result indicates that the parameter online test fails, the parameters need to be readjusted until the action test passes, so that the parameters of the corresponding prediction model that pass the online test are used to control the prosthesis at the abnormal part, ensuring the accuracy of prosthesis control. The motion of the prosthesis is then controlled according to the prediction result for each finger output by the multilayer perceptron, improving the flexibility of the prosthesis. An illustrative sketch of one such online control step is shown below.
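Purely as an illustration (the patent gives no code, and the transport to the prosthesis, the 0.5 threshold, and the function names are assumptions), one online control step could reuse a windowed feature function and per-finger decoders trained as described above:

```python
def control_step(emg_window, featurize, decoders, send_to_prosthesis, threshold=0.5):
    """One hypothetical online update.
    emg_window: latest raw samples (already down-sampled to 30 Hz as described earlier);
    featurize: windowed feature extraction, e.g. as sketched after step S210;
    decoders: {finger: fitted per-finger model exposing .predict};
    send_to_prosthesis: stub for whatever wired/wireless link the prosthetic hand uses."""
    feats = featurize(emg_window)[-1].reshape(1, -1)   # features of the most recent window
    commands = {}
    for finger, model in decoders.items():
        bend = float(model.predict(feats)[0])
        # illustrative mapping of the continuous prediction to an extend/flex command
        commands[finger] = "flex" if bend > threshold else "extend"
    send_to_prosthesis(commands)
    return commands
```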
In the embodiment of the disclosure, the flexion and extension of individual fingers are decoded by a multilayer perceptron from the surface electromyographic data of the residual muscles of the site associated with the amputee's abnormal part. This decoding approach allows the user to control the motion of each finger simultaneously and continuously, and to control the prosthesis in actions requiring higher dexterity, which can greatly improve the usability of the prosthesis and increase its flexibility. The range of application is widened, the limitation of executing only a fixed number of actions is avoided, the motion of the prosthesis can be fully controlled, and the accuracy of prosthesis control is improved. Moreover, the actions of the prosthesis can be sustained with less muscular effort, preventing muscle fatigue caused by long-term use and providing convenience for the user.
A specific flow chart for controlling a prosthesis is schematically shown in fig. 3, and referring to fig. 3, the method may specifically include:
in step S310, myoelectric electrodes, the number of which may be 5-7, are placed.
In step S320, the user imitates the mirrored actions while the electromyographic data is recorded synchronously.
In step S330, a plurality of features is computed with a sliding window. The features may include, but are not limited to, mean absolute value (MAV), number of zero crossings (ZC), number of slope sign changes (SSC), waveform length (WL), log detector, root mean square (RMS), and Willison amplitude.
In step S340, a plurality of channels are screened to perform channel selection.
In step S350, feature selection is performed on the plurality of features.
In step S360, the electromyographic data after the channel selection and the feature selection is input to the trained multi-layered perceptron for prediction.
In step S370, the state of each degree of freedom, i.e., the degree of bending of each finger, is determined by the trained multilayer perceptron.
Through the technical scheme in fig. 3, decoding the motion of each individual finger with the trained multilayer perceptron improves the flexibility of the prosthetic hand and allows the user to move a single finger or coordinate it with other fingers, which can greatly improve the usability of the myoelectric prosthetic hand.
In an embodiment of the present disclosure, there is also provided a prosthesis control device, as shown in fig. 4, the prosthesis control device 400 mainly includes the following modules:
the data acquisition module 401 is configured to acquire electromyographic data of a relevant portion corresponding to an abnormal portion when an abnormal portion simulation action is performed through a plurality of channels, and determine a plurality of characteristics according to the electromyographic data;
a screening module 402 for determining a target channel from the plurality of channels and a target feature from the plurality of features;
a parameter determining module 403, configured to train and decode the electromyographic data based on the target channel and the target feature, obtain a prediction result of each finger during a simulated motion, and obtain a parameter of a prediction model corresponding to each finger;
and the control module 404 is configured to control the artificial limb at the abnormal part according to the prediction result if the parameter online test passes.
In an exemplary embodiment of the present disclosure, the screening module includes: the channel training module is used for training a prediction model for each channel and taking the channel with the highest performance as a first optimal channel; and the channel selection module is used for pairing the first optimal channel with each of the rest channels, training a prediction model, and selecting a pair of channels with the highest performance as an optimal subset of the two channels until a preset channel condition is met after one channel is added.
In an exemplary embodiment of the present disclosure, the screening module includes: the characteristic training module is used for training a prediction model for each characteristic and taking the characteristic with the highest performance as a first optimal characteristic; and the feature selection module is used for pairing the first optimal feature with each of the rest features, training a prediction model, and selecting a pair of features with the highest performance as an optimal subset of the two features until a preset feature condition is met after one feature is added.
In an exemplary embodiment of the present disclosure, the parameter determination module includes: the screening control module is used for screening the electromyographic data based on the target channel and the target characteristic; and the model determining module is used for inputting the screened electromyographic data into the multilayer perceptron to train to obtain the prediction model, and testing the prediction model to obtain the prediction result of each finger.
In an exemplary embodiment of the present disclosure, the model determining module includes: the training data acquisition module is used for acquiring myoelectric data and determining training data from the myoelectric data; and the prediction model training module is used for training the multilayer perceptron according to the training data and adjusting the parameters of the multilayer perceptron based on the comparison result of the prediction result and the labeling result of the training data so as to obtain the trained multilayer perceptron as the prediction model.
In an exemplary embodiment of the present disclosure, after obtaining the prediction model, the apparatus further includes: and the prediction model testing module is used for determining test data from the electromyographic data and testing the prediction model based on the test data so as to process the prediction model according to a test result.
In an exemplary embodiment of the present disclosure, the control module includes: the online testing module is used for performing action testing on the artificial limb according to the prediction result of each finger to obtain an action testing result so as to determine whether the parameter online testing is passed or not according to the action testing result; and the control execution module is used for controlling the artificial limb according to the parameters of the prediction model corresponding to the prediction result if the parameters pass the online test.
In addition, the specific details of each part in the above prosthesis control device have been described in detail in the embodiments of the prosthesis control method part, and the details that are not disclosed can be referred to the embodiments of the method part, and thus are not described again.
It should be noted that although several modules or units of the device for action execution are mentioned in the above detailed description, such a division is not mandatory. Indeed, the features and functionality of two or more of the modules or units described above may be embodied in one module or unit, according to embodiments of the present disclosure. Conversely, the features and functions of one module or unit described above may be further divided and embodied by a plurality of modules or units.
Moreover, although the steps of the methods of the present disclosure are depicted in the drawings in a particular order, this does not require or imply that the steps must be performed in this particular order, or that all of the depicted steps must be performed, to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions, etc.
In the embodiment of the disclosure, an electronic device capable of implementing the method is also provided.
As will be appreciated by one skilled in the art, aspects of the present disclosure may be embodied as a system, method or program product. Accordingly, various aspects of the present disclosure may be embodied in the form of: an entirely hardware embodiment, an entirely software embodiment (including firmware, microcode, etc.), or an embodiment combining hardware and software aspects that may all generally be referred to herein as a "circuit," "module," or "system."
An electronic device 500 according to this embodiment of the disclosure is described below with reference to fig. 5. The electronic device 500 shown in fig. 5 is only an example and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 5, the electronic device 500 is embodied in the form of a general purpose computing device. The components of the electronic device 500 may include, but are not limited to: at least one processing unit 510, at least one storage unit 520, a bus 530 connecting the various system components (including the storage unit 520 and the processing unit 510), and a display unit 540.
Wherein the storage unit stores program code that is executable by the processing unit 510 to cause the processing unit 510 to perform steps according to various exemplary embodiments of the present disclosure as described in the above section "exemplary methods" of this specification. For example, the processing unit 510 may perform the steps as shown in fig. 2.
The storage unit 520 may include a readable medium in the form of a volatile storage unit, such as a random access memory unit (RAM) 5201 and/or a cache memory unit 5202, and may further include a read-only memory unit (ROM) 5203.
The storage unit 520 may also include a program/utility 5204 having a set of (at least one) program modules 5205, such program modules 5205 including, but not limited to: an operating system, one or more application programs, other program modules, and program data, each of which, or some combination thereof, may comprise an implementation of a network environment.
The bus 530 may be one or more of several types of bus structures, including a storage unit bus or storage unit controller, a peripheral bus, an accelerated graphics port, and a processor or local bus using any of a variety of bus architectures.
The electronic device 500 may also communicate with one or more external devices 600 (e.g., keyboard, pointing device, bluetooth device, etc.), with one or more devices that enable a user to interact with the electronic device 500, and/or with any devices (e.g., router, modem, etc.) that enable the electronic device 500 to communicate with one or more other computing devices. Such communication may occur via input/output (I/O) interfaces 550. Also, the electronic device 500 may communicate with one or more networks (e.g., a Local Area Network (LAN), a Wide Area Network (WAN), and/or a public network, such as the internet) via the network adapter 560. As shown, the network adapter 560 communicates with the other modules of the electronic device 500 over the bus 530. It should be appreciated that although not shown in the figures, other hardware and/or software modules may be used in conjunction with the electronic device 500, including but not limited to: microcode, device drivers, redundant processing units, external disk drive arrays, RAID systems, tape drives, and data backup storage systems, among others.
In an embodiment of the present disclosure, a computer-readable storage medium is further provided, on which a program product capable of implementing the above-mentioned method of the present specification is stored. In some possible embodiments, various aspects of the disclosure may also be implemented in the form of a program product comprising program code for causing a terminal device to perform the steps according to various exemplary embodiments of the disclosure described in the "exemplary methods" section above of this specification, when the program product is run on the terminal device.
The program product for implementing the above method according to an embodiment of the present disclosure may employ a portable compact disc read-only memory (CD-ROM), include program code, and be run on a terminal device such as a personal computer. However, the program product of the present disclosure is not limited thereto; in this document, a readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
A computer readable signal medium may include a propagated data signal with readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A readable signal medium may also be any readable medium that is not a readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
Program code embodied on a readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wireline, optical fiber cable, RF, etc., or any suitable combination of the foregoing.
Program code for carrying out operations for the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, C++ or the like and conventional procedural programming languages, such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computing device, partly on the user's device, as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server. In the case of a remote computing device, the remote computing device may be connected to the user computing device through any kind of network, including a Local Area Network (LAN) or a Wide Area Network (WAN), or may be connected to an external computing device (e.g., through the internet using an internet service provider).
Furthermore, the above-described figures are merely schematic illustrations of processes included in methods according to embodiments of the present disclosure, and are not intended to be limiting. It will be readily understood that the processes shown in the above figures are not intended to indicate or limit the chronological order of the processes. In addition, it is also readily understood that these processes may be performed synchronously or asynchronously, e.g., in multiple modules.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is to be limited only by the terms of the appended claims.

Claims (10)

1. A prosthesis control method, comprising:
acquiring, through a plurality of channels, electromyographic data of a relevant part corresponding to an abnormal part while an action of the abnormal part is simulated, and determining a plurality of features according to the electromyographic data;
determining a target channel from the plurality of channels and a target feature from the plurality of features;
training and decoding the electromyographic data based on the target channel and the target feature to obtain a prediction result of each finger in the simulated motion and parameters of a prediction model corresponding to each finger;
and if the parameter online test is passed, controlling the artificial limb of the abnormal part according to the prediction result.
2. The prosthesis control method according to claim 1, wherein the determining a target channel from the plurality of channels includes:
training a prediction model for each channel, and taking the channel with the highest performance as a first optimal channel;
and pairing the first optimal channel with each of the remaining channels, training a prediction model for each pairing, and selecting the pair of channels with the highest performance as an optimal subset of two channels, and so on, adding one channel at a time until a preset channel condition is met.
3. The prosthesis control method according to claim 1, wherein the determining a target feature from the plurality of features includes:
training a prediction model for each feature, and taking the feature with the highest performance as a first optimal feature;
and pairing the first optimal feature with each of the remaining features, training a prediction model for each pairing, and selecting the pair of features with the highest performance as an optimal subset of two features, and so on, adding one feature at a time until a preset feature condition is met.
4. The prosthesis control method according to claim 1, wherein the training and decoding the electromyographic data based on the target channel and the target feature to obtain a prediction result of each finger in the simulated motion includes:
screening the electromyographic data based on the target channel and the target feature;
inputting the screened electromyographic data into a multilayer perceptron for training to obtain the prediction model, and testing the prediction model to obtain the prediction result of each finger.
5. The prosthesis control method according to claim 4, wherein the inputting the screened electromyographic data into the multilayer perceptron for training to obtain the prediction model includes:
acquiring the electromyographic data, and determining training data from the electromyographic data;
and training the multilayer perceptron according to the training data, and adjusting parameters of the multilayer perceptron based on a comparison between a prediction result and a labeling result of the training data, to obtain the trained multilayer perceptron as the prediction model.
6. The prosthesis control method according to claim 5, wherein after the prediction model is obtained, the method further includes:
and determining test data from the electromyographic data, and testing the prediction model based on the test data so as to process the prediction model according to a test result.
7. The prosthesis control method according to claim 1, wherein the controlling the artificial limb of the abnormal part according to the prediction result if the parameter online test is passed includes:
according to the prediction result of each finger, performing an action test on the artificial limb to obtain an action test result so as to determine whether the parameter online test passes or not according to the action test result;
and if the parameter online test is passed, controlling the artificial limb according to the parameter of the prediction model corresponding to the prediction result.
8. A prosthesis control device, comprising:
a data acquisition module, used for acquiring, through a plurality of channels, electromyographic data of a relevant part corresponding to an abnormal part while an action of the abnormal part is simulated, and for determining a plurality of features according to the electromyographic data;
a screening module for determining a target channel from the plurality of channels and a target feature from the plurality of features;
a parameter determination module, used for training and decoding the electromyographic data based on the target channel and the target feature to obtain a prediction result of each finger in the simulated motion and parameters of a prediction model corresponding to each finger;
and a control module, used for controlling the artificial limb of the abnormal part according to the prediction result if the parameter online test is passed.
9. A computer-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the prosthesis control method according to any one of claims 1 to 7.
10. An electronic device, comprising:
a processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the prosthesis control method of any one of claims 1-7 via execution of the executable instructions.
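By way of illustration only, the forward selection recited in claims 2 and 3 above, in which channels (or features) are added one at a time and the best-performing combination is kept until a preset condition is met, could be sketched in Python as follows; the classifier, the cross-validated scoring and the stopping rule are assumptions made for the example, not details disclosed by this application.

    # Illustrative sketch of the forward selection in claims 2 and 3; the classifier,
    # scoring and stopping rule are assumptions, not disclosed details.
    import numpy as np
    from sklearn.model_selection import cross_val_score
    from sklearn.neural_network import MLPClassifier

    def forward_select(X: np.ndarray, y: np.ndarray, n_select: int) -> list:
        """X: (n_windows, n_candidates), columns being candidate channels or features."""
        remaining = list(range(X.shape[1]))
        selected = []
        while remaining and len(selected) < n_select:   # preset stopping condition
            best_idx, best_score = None, float("-inf")
            for idx in remaining:                       # pair each candidate with the
                cols = selected + [idx]                 # already-selected subset
                score = cross_val_score(MLPClassifier(max_iter=300),
                                        X[:, cols], y, cv=3).mean()
                if score > best_score:
                    best_idx, best_score = idx, score
            selected.append(best_idx)                   # keep the best-performing subset
            remaining.remove(best_idx)
        return selected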
CN202010148338.7A 2020-03-05 2020-03-05 Artificial limb control method and device, storage medium and electronic equipment Pending CN111374808A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010148338.7A CN111374808A (en) 2020-03-05 2020-03-05 Artificial limb control method and device, storage medium and electronic equipment

Publications (1)

Publication Number Publication Date
CN111374808A true CN111374808A (en) 2020-07-07

Family

ID=71218672

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010148338.7A Pending CN111374808A (en) 2020-03-05 2020-03-05 Artificial limb control method and device, storage medium and electronic equipment

Country Status (1)

Country Link
CN (1) CN111374808A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1043003A1 (en) * 1999-03-22 2000-10-11 Advanced Control Research Ltd. Prosthetic limbs
CN103892945A (en) * 2012-12-27 2014-07-02 中国科学院深圳先进技术研究院 Myoelectric prosthesis control system
CN109009586A (en) * 2018-06-25 2018-12-18 西安交通大学 A kind of myoelectricity continuous decoding method of the man-machine natural driving angle of artificial hand wrist joint

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MA Lele et al.: "An Optimal Channel Selection Method for Electromyographic Signals Based on Gradient Boosting Trees", Information and Control *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112132192A (en) * 2020-09-07 2020-12-25 北京海益同展信息科技有限公司 Model training method and device, electronic equipment and storage medium
CN114224577A (en) * 2022-02-24 2022-03-25 深圳市心流科技有限公司 Training method and device for intelligent artificial limb, electronic equipment, intelligent artificial limb and medium

Similar Documents

Publication Publication Date Title
US11361522B2 (en) User-controlled tuning of handstate representation model parameters
Tam et al. A fully embedded adaptive real-time hand gesture classifier leveraging HD-sEMG and deep learning
Ma et al. Continuous estimation of upper limb joint angle from sEMG signals based on SCA-LSTM deep learning approach
Shim et al. Joint active feature acquisition and classification with variable-size set encoding
Caruana et al. Case-based explanation of non-case-based learning methods.
Yu et al. Application of PSO-RBF neural network in gesture recognition of continuous surface EMG signals
Shim et al. Multi-channel electromyography pattern classification using deep belief networks for enhanced user experience
JP2006503350A (en) Personal data entry device with limited hand capabilities
CN111374808A (en) Artificial limb control method and device, storage medium and electronic equipment
Zanini et al. Parkinson’s disease emg signal prediction using neural networks
Atitallah et al. Simultaneous pressure sensors monitoring system for hand gestures recognition
Marinelli et al. Performance evaluation of pattern recognition algorithms for upper limb prosthetic applications
Pallotti et al. Measurements comparison of finger joint angles in hand postures between an sEMG armband and a sensory glove
Jo et al. Real-time hand gesture classification using crnn with scale average wavelet transform
Copaci et al. sEMG-based gesture classifier for a rehabilitation glove
Vásconez et al. A hand gesture recognition system using EMG and reinforcement learning: a Q-learning approach
Prahm et al. Combining two open source tools for neural computation (BioPatRec and Netlab) improves movement classification for prosthetic control
Nia et al. EMG-Based Hand Gestures Classification Using Machine Learning Algorithms
Yadav et al. Supervised learning technique for prediction of diseases
Leone et al. On-FPGA spiking neural networks for multi-variable end-to-end neural decoding
Nayak et al. Cognitive computing in software evaluation
Al-Qaisy et al. AI-based portable gesture recognition system for hearing impaired people using wearable sensors
Zheng A noble classification framework for data glove classification of a large number of hand movements
Zhou et al. Incremental learning in multiple limb positions for electromyography-based gesture recognition using hyperdimensional computing
Ghildiyal et al. Electromyography pattern-recognition based prosthetic limb control using various machine learning techniques

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: BEIJING HAIYI TONGZHAN INFORMATION TECHNOLOGY Co.,Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20200707