CN114224483B - Training method for artificial limb control, terminal equipment and computer readable storage medium


Info

Publication number
CN114224483B
CN114224483B (application CN202111410424.1A)
Authority
CN
China
Prior art keywords
action
training
user
actual
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111410424.1A
Other languages
Chinese (zh)
Other versions
CN114224483A (en)
Inventor
韩璧丞
黄琦
周建吾
王俊霖
古月
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Qiangnao Technology Co ltd
Original Assignee
Zhejiang Qiangnao Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Qiangnao Technology Co ltd filed Critical Zhejiang Qiangnao Technology Co ltd
Priority to CN202111410424.1A priority Critical patent/CN114224483B/en
Publication of CN114224483A publication Critical patent/CN114224483A/en
Application granted granted Critical
Publication of CN114224483B publication Critical patent/CN114224483B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61FFILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F2/00Filters implantable into blood vessels; Prostheses, i.e. artificial substitutes or replacements for parts of the body; Appliances for connecting them with the body; Devices providing patency to, or preventing collapsing of, tubular structures of the body, e.g. stents
    • A61F2/50Prostheses not implantable in the body
    • A61F2/68Operating or control means
    • A61F2/70Operating or control means electrical
    • A61F2/72Bioelectric control, e.g. myoelectric
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • G06F3/015Input arrangements based on nervous system activity detection, e.g. brain waves [EEG] detection, electromyograms [EMG] detection, electrodermal response detection
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising
    • GPHYSICS
    • G16INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16HHEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H50/00ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics
    • G16H50/20ICT specially adapted for medical diagnosis, medical simulation or medical data mining; ICT specially adapted for detecting, monitoring or modelling epidemics or pandemics for computer-aided diagnosis, e.g. based on medical expert systems
    • AHUMAN NECESSITIES
    • A61MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61BDIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B34/00Computer-aided surgery; Manipulators or robots specially adapted for use in surgery
    • A61B34/10Computer-aided planning, simulation or modelling of surgical operations
    • A61B2034/101Computer-aided simulation of surgical operations
    • A61B2034/102Modelling of surgical devices, implants or prosthesis
    • A61B2034/104Modelling the effect of the tool, e.g. the effect of an implanted prosthesis or for predicting the effect of ablation or burring


Abstract

The invention discloses a training method for prosthesis control, a terminal device, and a computer-readable storage medium. The method comprises the following steps: acquiring the actual neural and muscle electrical signals produced when a user performs an action based on a target action instruction given by the current training course; determining, according to those signals, the actual action information generated by the prosthesis model during the user's action; and determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course. In other words, the action actually presented by the prosthesis model under the user's control is evaluated against the target action defined by the course's reference action information, so that the accuracy of the completed target action can be determined and the effect of the user's training objectively quantified.

Description

Training method for artificial limb control, terminal equipment and computer readable storage medium
Technical Field
The present invention relates to the field of auxiliary training technologies, and in particular to a training method for prosthesis control, a terminal device, and a computer-readable storage medium.
Background
A prosthesis user spends considerably more time and effort than an able-bodied person to complete a grasping action, and must train repeatedly over a long period to achieve accurate control. Existing prostheses, however, rely on a rehabilitation doctor's visual observation to evaluate the training effect, which is prone to deviation and cannot objectively quantify or effectively improve the training outcome.
The foregoing is provided merely for the purpose of facilitating understanding of the technical solutions of the present invention and is not intended to represent an admission that the foregoing is prior art.
Disclosure of Invention
The main purpose of the present invention is to provide a training method for prosthesis control, a terminal device, and a computer-readable storage medium, aiming to solve the problem that relying on a rehabilitation doctor's visual observation cannot objectively quantify or effectively improve the training effect.
To achieve the above object, the present invention provides a training method of prosthesis control, the training method of prosthesis control including:
acquiring actual neural and muscle electrical signals when a user performs an action based on a target action instruction given by the current training course;
determining, according to the actual neural and muscle electrical signals, the actual action information generated by the prosthesis model during the user's action;
and determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course.
Optionally, the step of determining the actual action information generated during the user's action from the actual neural and muscle electrical signals includes:
determining the action control instruction corresponding to the user's action according to the actual neural and muscle electrical signals;
acquiring the motion control parameters corresponding to the action control instruction;
and controlling the motion of the prosthesis model according to the motion control parameters, so as to determine the actual action information generated by the prosthesis model during the user's action.
Optionally, determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course includes:
comparing the actual neural and muscle electrical signals with the reference action signals corresponding to the target action instruction given by the current training course to obtain a signal comparison result;
and determining the accuracy with which the user completes the target action according to the actual action information, the reference action information corresponding to the current training course, and the signal comparison result.
Optionally, the step of comparing the actual neural and muscle electrical signals with the reference action signals corresponding to the current training course to obtain a signal comparison result includes:
acquiring the sub-neural and sub-muscle electrical signals of each channel of the actual neural and muscle electrical signals;
acquiring the sub-reference action signals of each channel of the reference action signals of the current training course;
for each matching channel, comparing the channel's sub-neural and sub-muscle electrical signals with its sub-reference action signal to determine a per-channel comparison result, and/or computing a per-channel correlation coefficient between them;
and obtaining the overall signal comparison result from the per-channel comparison results and/or the per-channel correlation coefficients.
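The per-channel comparison above can be sketched as follows. This is a minimal illustration, not the patent's implementation: it assumes the multi-channel signals are available as NumPy arrays of shape (channels, samples), uses the Pearson coefficient as the per-channel correlation (the patent does not fix a particular coefficient), and introduces a hypothetical match threshold.

```python
import numpy as np

def compare_channels(actual, reference, threshold=0.8):
    """Compare actual vs. reference signals channel by channel.

    actual, reference: arrays of shape (n_channels, n_samples).
    Returns the per-channel Pearson correlation coefficient and a
    boolean match flag for each channel (threshold is illustrative).
    """
    n_channels = actual.shape[0]
    corr = np.empty(n_channels)
    for ch in range(n_channels):
        # Pearson correlation between the two signals on this channel
        corr[ch] = np.corrcoef(actual[ch], reference[ch])[0, 1]
    matches = corr >= threshold
    return corr, matches
```

An overall comparison result could then aggregate `corr` or `matches` across channels, for example as shown in the 8-channel radar chart of fig. 6.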
Optionally, after the step of determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course, the method includes:
determining the course the user should train next according to the accuracy with which the user completed the target action.
Optionally, the step of determining the course to be trained according to the accuracy with which the user completed the target action includes:
when the accuracy is greater than or equal to a preset accuracy, determining the next training course after the current training course as the course to be trained;
and when the accuracy is smaller than the preset accuracy, keeping the current training course as the course to be trained.
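The course-progression rule above amounts to a simple threshold check; a sketch, assuming courses are indexed sequentially and that the preset accuracy and course count are configuration values not specified by the patent:

```python
def next_course(current_index, accuracy, preset_accuracy=0.8, n_courses=10):
    """Pick the course to train next: advance to the adjacent course when
    the accuracy reaches the preset threshold, otherwise repeat the
    current course. n_courses caps progression at the last course."""
    if accuracy >= preset_accuracy and current_index + 1 < n_courses:
        return current_index + 1
    return current_index
```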
Optionally, the step of determining the actual action information generated by the prosthesis model during the user's action based on the actual neural and muscle electrical signals includes:
determining, according to the actual neural and muscle electrical signals, the actual action information generated by the prosthesis model while the user acts within a preset duration;
and the step of determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course includes:
determining, according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course, the number of completed target actions and the total number of actions performed by the prosthesis model within the preset duration, the target action being the training action defined by the reference action information of the current training course;
and determining the accuracy of the prosthesis model's actions within the preset duration from the number of completions and the total number, so as to obtain the accuracy with which the user completes the target action.
Optionally, the step of determining the actual action information generated by the prosthesis model during the user's action based on the actual neural and muscle electrical signals includes:
determining, according to the actual neural and muscle electrical signals, the actual action information generated by the prosthesis model while the user acts within a preset duration;
the step of determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course includes:
determining, according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course, the duration for which the training action defined by the reference action information was held, and updating the count of times that training action was performed;
and determining the accuracy with which the user completes the target action according to the duration and the count.
In addition, to achieve the above object, the present invention also provides a terminal device, including a memory, a processor, and a prosthesis-control training program stored in the memory and executable on the processor; when executed by the processor, the training program implements the steps of the training method for prosthesis control described above.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium storing a prosthesis-control training program which, when executed by a processor, implements the steps of the training method for prosthesis control described above.
According to the training method, terminal device, and computer-readable storage medium for prosthesis control, with the current training course determined, the actual neural and muscle electrical signals generated during the user's training are acquired to determine the actual action information generated by the prosthesis model during the user's action, and the accuracy with which the user completes the target action is determined from that actual action information and the reference action information corresponding to the current training course. In other words, the action actually presented by the prosthesis model under the user's control during course-based training is evaluated against the target action defined by the course's reference action information, so that the accuracy of the completed target action is determined, the training effect is quantified, and the quality and effect of the user's training are improved.
Drawings
FIG. 1 is a schematic diagram of a terminal device involved in various embodiments of the prosthetic control training method of the present invention;
FIG. 2 is a schematic flow chart of a first embodiment of the training method of the prosthetic control of the present invention;
FIG. 3 is a schematic diagram of the motion control of a prosthesis model by motion control parameters according to the first embodiment of the prosthesis training method of the present invention;
FIG. 4 is a schematic flow chart of a second embodiment of the training method of the prosthetic control of the present invention;
FIG. 5 is a graph of 8-channel nerve electrical and muscle electrical signals;
FIG. 6 is an 8-channel radar chart;
FIG. 7 is a flow chart of a third embodiment of the training method of the prosthetic control of the present invention;
FIG. 8 is a flow chart of a fourth embodiment of the prosthesis control training method of the present invention;
fig. 9 is a schematic diagram of a grasping process.
The achievement of the objects, functional features and advantages of the present invention will be further described with reference to the accompanying drawings, in conjunction with the embodiments.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the scope of the invention.
In the following description, suffixes such as "module", "component", or "unit" used to denote elements are adopted only to facilitate the description of the present invention and have no special meaning in themselves; thus "module", "component", and "unit" may be used interchangeably.
The terminal device may be implemented in various forms. For example, the terminal devices described in the present invention may include mobile terminals such as cell phones, tablet computers, notebook computers, palm computers, personal digital assistants (Personal Digital Assistant, PDA), portable media players (Portable Media Player, PMP), navigation devices, wearable devices, smart bracelets, pedometers, and the like.
It will be appreciated by those skilled in the art that, apart from elements specifically intended for mobile use, the configuration according to the embodiments of the present invention can also be applied to fixed-type terminals.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a terminal device according to various embodiments of the training method for controlling a prosthetic limb according to the present invention. As shown in fig. 1, the terminal device may include: a memory 101 and a processor 102. It will be appreciated by those skilled in the art that the block diagram of the terminal shown in fig. 1 is not limiting of the terminal, and that the terminal may include more or fewer components than shown, or may combine certain components, or a different arrangement of components. The memory 101 stores therein an operating system and a training program for controlling the prosthesis. Processor 102 is the control center of the terminal device, and processor 102 executes a training program for prosthetic control stored in memory 101 to implement the steps of the various embodiments of the training method for prosthetic control of the present invention. Optionally, the terminal device may further include a display unit 103, where the display unit 103 includes a display panel, and the display panel may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an Organic Light-Emitting Diode (OLED), or the like, for outputting and displaying an interface browsed by the user.
Based on the above-mentioned block diagram of the terminal device, various embodiments of the training method of prosthetic control of the present invention are presented.
In a first embodiment, the present invention provides a training method for controlling a prosthetic limb, please refer to fig. 2, fig. 2 is a flow chart of a first embodiment of the training method for controlling a prosthetic limb according to the present invention. In this embodiment, the training method of the prosthesis control includes the steps of:
step S10, acquiring actual nerve electricity and muscle electrical signals when a user performs actions based on target action instructions given by current training course training;
the nerve and muscle electrical signals, i.e., myoelectric signals, are the temporal and spatial superposition of the motor action potentials in numerous muscle fibers. The surface nerve electricity and muscle electric signals are the combined effect of the electric activity on the superficial muscles and nerve trunks on the skin surface, and can reflect the activity of nerve muscles to a certain extent.
In the practical application process, when a user performs controlled artificial limb training, the user can perform action training according to the action instruction by outputting the action instruction corresponding to the training action of the training course, when the user performs action training based on the action instruction, the user generates action according to the action instruction to cause muscle activity, the actual nerve electricity and the muscle electric signal during the training of the user based on the current training course are obtained, the actual nerve electricity and the muscle electric signal during the training of the user can be obtained through collecting the myoelectric sensor, and the actual nerve electricity and the muscle electric signal during the training of the user can be obtained through collecting the arm ring or the receiving cavity arranged on the arm of the user.
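Acquired multi-channel myoelectric recordings are typically windowed and reduced to amplitude features before any matching; the following is an illustrative sketch only (the patent does not specify a feature), assuming the recording is a NumPy array sampled at a known rate and using per-channel RMS, a common myoelectric feature:

```python
import numpy as np

def window_features(raw, fs=1000, win_ms=200):
    """Split a multi-channel recording into fixed windows and compute
    the per-channel RMS amplitude of each window.

    raw: array of shape (n_channels, n_samples) sampled at fs Hz.
    Returns an array of shape (n_windows, n_channels)."""
    win = int(fs * win_ms / 1000)
    n_windows = raw.shape[1] // win
    feats = np.empty((n_windows, raw.shape[0]))
    for i in range(n_windows):
        seg = raw[:, i * win:(i + 1) * win]
        # RMS over the window, per channel
        feats[i] = np.sqrt(np.mean(seg ** 2, axis=1))
    return feats
```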
Step S20, determining actual motion information generated by the artificial limb model during the motion of the user according to the actual nerve electricity and the muscle electricity signals;
Before training, a prosthesis model and a coordinate system are built with three-dimensional drawing software, and a graphics development kit is applied to the prosthesis model. In addition, the correspondences among the training course, the training action of the course, the neural and muscle electrical signals corresponding to the training action, the action information corresponding to the training action, the action control instruction, and the motion control parameters of the prosthesis model may be recorded in advance in the system database. The neural and muscle electrical signals corresponding to a training action serve as the reference action signals (i.e., the reference neural and reference muscle electrical signals), and the action control instruction is used to determine the motion control parameters that drive the motion of the prosthesis model.
In this embodiment, a virtual-reality-based prosthesis training system assists the user in training prosthesis control. Without wearing a physical prosthesis, the user is instructed to train by outputting the action instruction corresponding to the training action indicated by the course; the actual neural and muscle electrical signals produced during training are acquired, the virtual prosthesis model is driven according to the action information determined from those signals, and whether the user accurately performed the action indicated by the course is then determined, so that the user's training effect can be evaluated.
As an alternative embodiment, step S20 includes:
determining the action control instruction corresponding to the user's action according to the actual neural and muscle electrical signals;
acquiring the motion control parameters corresponding to the action control instruction;
and controlling the motion of the prosthesis model according to the motion control parameters, so as to determine the actual action information generated by the prosthesis model during the user's action.
In practice, when the user receives the action instruction of a training action and performs the corresponding movement, the resulting muscle activity produces the actual neural and muscle electrical signals, which are acquired and compared with the pre-stored neural and muscle electrical signals of each training action. When the actual signals match the signals of a pre-stored training action, the action control instruction associated with that action is retrieved, thereby determining the action control instruction corresponding to the user's action.
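The matching step above can be sketched as a nearest-template lookup; this is an assumption-laden illustration, not the patent's algorithm: it represents each pre-stored training action as a reference signal array, scores candidates by mean per-channel Pearson correlation, and uses a hypothetical minimum-correlation threshold:

```python
import numpy as np

def match_instruction(actual, templates, min_corr=0.8):
    """Match the user's actual signals against pre-stored per-action
    reference signals and return the action control instruction of the
    best match, or None if nothing correlates strongly enough.

    actual: array of shape (n_channels, n_samples).
    templates: dict mapping an instruction name to a reference array
    of the same shape."""
    best, best_score = None, min_corr
    for instruction, ref in templates.items():
        # mean per-channel correlation as the match score
        score = np.mean([np.corrcoef(actual[ch], ref[ch])[0, 1]
                         for ch in range(actual.shape[0])])
        if score >= best_score:
            best, best_score = instruction, score
    return best
```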
Optionally, the step of obtaining the motion control parameter corresponding to the motion control instruction includes:
acquiring the rotation angle of each joint of the prosthesis model corresponding to the action control instruction;
and determining the target coordinate parameters of each joint after motion according to its rotation angle and a preset reference coordinate system, so as to obtain the motion control parameters.
It should be noted that the prosthesis model and its coordinate system are built with three-dimensional drawing software; the assembly relationship of the joints of the prosthesis model and each joint coordinate system are set, a reference coordinate system is established according to the kinematics of the prosthesis, and the rotation matrix of each joint coordinate system relative to the reference coordinate system is obtained. Reference may be made to fig. 3, a schematic diagram of controlling the motion of the model through motion control parameters in the first embodiment of the prosthesis training method of the present invention. That is, the rotation angle of each joint corresponding to the determined action information is obtained, and the target coordinate parameters of each joint after motion are determined from its rotation angle in the preset reference coordinate system, giving the motion control parameters that drive the prosthesis model. In this way, the muscle activity caused by the action the user performs in response to the course's action instruction is visualized as motion of the prosthesis model.
It is understood that the correspondence between each action control instruction and the rotation angles of the joints of the prosthesis model may be set in advance, so that once the action control instruction is determined, the corresponding joint rotation angles can be obtained.
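The angle-to-coordinates step can be illustrated with a small forward-kinematics sketch. This is a simplified planar example under assumed conventions (each joint rotates about its local z axis and links extend along x), not the patent's rotation matrices:

```python
import numpy as np

def rotation_z(angle_rad):
    """Rotation matrix of a joint turning by angle_rad about its z axis."""
    c, s = np.cos(angle_rad), np.sin(angle_rad)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def joint_targets(joint_angles, link_lengths):
    """Chain the per-joint rotations to obtain each joint's target
    coordinates in the preset reference frame.

    joint_angles: rotation angle of each joint, in radians.
    link_lengths: length of the link following each joint."""
    pos = np.zeros(3)
    rot = np.eye(3)
    targets = []
    for angle, length in zip(joint_angles, link_lengths):
        rot = rot @ rotation_z(angle)          # accumulate the rotation
        pos = pos + rot @ np.array([length, 0.0, 0.0])
        targets.append(pos.copy())
    return targets
```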
The motion of the prosthesis model is controlled according to the motion control parameters, and the actual action information generated while the model moves according to those parameters is acquired; in this way the user's muscle activity, caused by the action performed in response to the course's action instruction, visibly drives the model. Here the action information refers to the action feedback of the prosthesis model after moving according to the motion control parameters, such as a full fist or a half fist; optionally, it includes at least one of an action name and an action image.
Optionally, the action name may be determined from the training action corresponding to the action information, or obtained by recognizing the action image captured after the prosthesis model has moved according to the motion control parameters; this is not limited here.
Step S30: determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course.
It should be noted that, when determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course, three cases can be distinguished. When the action information only includes the action name, the action name can be compared with the training action name of the target action determined by the reference action information corresponding to the training course; when the action name is the same as the training action name, the accuracy of the user completing the target action is determined to be 100%, otherwise it is 0. When the action information only includes the action image, the similarity between the action image and the training action pattern of the target action determined by the reference action information corresponding to the training course can be obtained, and the accuracy of the user completing the target action determined from that similarity. For example, the preset similarity interval in which the similarity falls can be obtained and the accuracy determined accordingly, e.g.: when the similarity falls in the interval 0-30%, the accuracy of the user completing the target action is 40%; in the interval 30-60%, the accuracy is 60%; in the interval 60-100%, the accuracy is 90%. This mapping is not limited herein. When the action information includes both an action name and an action image, the action name can be compared with the training action name of the target action determined by the reference action information corresponding to the training course to obtain a first reference accuracy from the comparison result, and the similarity between the action image and the training action pattern of the target action can be obtained to derive a second reference accuracy. The weight values respectively corresponding to the first reference accuracy and the second reference accuracy can then be obtained, and the accuracy of the user completing the target action determined from the first reference accuracy, the weight value of the first reference accuracy, the second reference accuracy, and the weight value of the second reference accuracy.
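As an illustrative sketch (not part of the claimed method), the interval mapping and weighted combination described above could look as follows; the function names, the 50/50 weights, and the interval-to-accuracy mapping are the example values from the description and would be tuned in practice:

```python
def accuracy_from_similarity(similarity):
    """Map an image similarity in [0, 1] to an accuracy using the example
    intervals from the description: [0, 30%) -> 40%, [30%, 60%) -> 60%,
    [60%, 100%] -> 90%."""
    if similarity < 0.3:
        return 0.40
    if similarity < 0.6:
        return 0.60
    return 0.90


def combined_accuracy(name_matches, similarity, w_name=0.5, w_image=0.5):
    """Weighted combination of the first reference accuracy (100% on an
    exact action-name match, otherwise 0) and the second reference
    accuracy derived from the image similarity. The weights are
    hypothetical placeholders."""
    first_ref = 1.0 if name_matches else 0.0
    second_ref = accuracy_from_similarity(similarity)
    return w_name * first_ref + w_image * second_ref
```

Under these assumed weights, an exact name match combined with 80% image similarity would yield 0.5 × 1.0 + 0.5 × 0.9 = 95%.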
In this embodiment, with the current training course determined, the actual neural and muscle electrical signals generated while the user trains are acquired to determine the corresponding action control instruction generated when the user acts. The action control parameters corresponding to the artificial limb model are then determined through the action control instruction so as to control the artificial limb model to act, thereby determining the action actually presented by the artificial limb model while the user trains. The accuracy with which the user completes the target action is then determined according to the obtained actual action information of the artificial limb model acting under the action control parameters and the reference action information corresponding to the current training course. That is, the action actually presented by the artificial limb model under the user's control during training based on the training course is evaluated against the target action determined by the reference action information corresponding to the training course, and the accuracy of the user completing the target action is determined accordingly, so that the training effect is quantified while the quality and effect of user training are improved.
As an alternative embodiment, after step S30, the method further includes:
and determining the course to be trained of the user according to the accuracy.
By evaluating the action actually presented by the artificial limb model under the user's control during training based on the training course against the target action determined by the reference action information corresponding to the training course, the accuracy of completing the target action is determined and the training effect is quantified. Further, the quality and effect of user training can be improved by determining the course to be trained of the user according to the accuracy. Specifically, when the accuracy is greater than or equal to a preset accuracy, it indicates that the action actually presented by the artificial limb model under the user's control during training based on the training course most likely matches the target action; the next course adjacent to the current training course is then determined to be the course to be trained, and the user proceeds to the next course for training. When the accuracy is smaller than the preset accuracy, it indicates that the action actually presented by the artificial limb model under the user's control most likely does not match the target action; the course to be trained is then determined to be the current training course, and training is repeated to improve the accuracy of the user completing the target action until the accuracy is greater than or equal to the preset accuracy, after which the user proceeds to the next course for training.
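The course-progression rule above can be sketched as a small helper; the function name and the 0.8 preset accuracy are hypothetical, chosen only for illustration:

```python
def next_course(current_index, accuracy, preset_accuracy=0.8):
    """Decide the course to be trained: advance to the adjacent next
    course when the accuracy reaches the preset accuracy, otherwise
    repeat the current training course."""
    if accuracy >= preset_accuracy:
        return current_index + 1
    return current_index
```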
As an alternative embodiment, after step S30, the method further includes:
and outputting the accuracy of the user completing the target action, so that the user quantitatively knows this accuracy, which quantifies the training effect while improving the quality and effect of user training.
For ease of understanding, an example follows. When an action instruction for making a full fist is received, the user generates an action that causes muscle activity, and the user's actual neural and muscle electrical signals are acquired and compared with the neural and muscle electrical signals corresponding to pre-stored training actions. In one case, the action generated by the user is making a full fist: the matching degree between the actual neural and muscle electrical signals and those of the full-fist training action is high, the action information corresponding to the full-fist training action is obtained, and the action control parameters corresponding to that action information are obtained, so that the artificial limb model is controlled through the action control parameters to make a full fist. In this way the muscle activity caused by the user's action controls the artificial limb model and is visualized, and it is confirmed that the muscle activity caused by the user's action matches the action instruction issued by the training course, thereby achieving the purpose of instructing the user to perform action training for making a full fist.
In the case where the action actually generated by the user is making a half fist, the matching degree between the actual neural and muscle electrical signals and those of the half-fist training action is high: the action information corresponding to the half-fist training action is obtained, the action control parameters corresponding to that action information are obtained, and the artificial limb model is controlled through the action control parameters to make a half fist. The muscle activity caused by the user's action thus controls the artificial limb model and is visualized, and it is confirmed that the muscle activity caused by the user's action does not match the issued full-fist action instruction of the training course. The user can then be corrected through this visualization: for example, if the current training state is a half fist, the user can be prompted to increase the strength and amplitude of the fist-making action so as to achieve the full-fist action training.
In another case, the neural and muscle electrical signals generated by the muscle activity caused by the user's actual action do not match the neural and muscle electrical signals corresponding to any pre-stored training action; that is, the user has not accurately made the training action corresponding to the action instruction of the training course. Prompt information can then be output to prompt the user to act according to the action instruction of the training course. Optionally, the prompt information includes, but is not limited to, at least one of voice information, graphic information, and video information. The prompt information may include action step instruction information for the training action of the training course, so that the user can make the training action corresponding to the action instruction of the training course.
In the technical scheme disclosed in this embodiment, with the current training course determined, the actual neural and muscle electrical signals generated while the user trains are acquired to determine the actual action information generated by the artificial limb model when the user acts, and the accuracy with which the user completes the target action is determined according to the actual action information and the reference action information corresponding to the current training course. That is, the action actually presented by the artificial limb model under the user's control during training based on the training course is evaluated against the target action determined by the reference action information corresponding to the training course, and the accuracy of completing the target action is determined accordingly, improving the quality and effect of user training while quantifying the training effect.
Based on the first embodiment, a second embodiment of the training method for prosthetic control according to the present invention is proposed; please refer to fig. 4, which is a flow chart of the second embodiment of the training method for prosthetic control according to the present invention. In this embodiment, step S30 includes:
step S31, comparing the actual neural and muscle electrical signals with the reference action signals corresponding to the target action instruction given by the current training course to obtain a signal comparison result;
And step S32, determining the accuracy of the user completing the target action according to the actual action information, the reference action information corresponding to the current training course, and the signal comparison result.
It is easy to understand that, to determine the accuracy of the user completing the target action more accurately and comprehensively, the actual neural and muscle electrical signals may further be compared with the reference action signals corresponding to the target action instruction given by the current training course, and the accuracy determined from the actual action information, the reference action information corresponding to the current training course, and the signal comparison result. Specifically, the action name may be compared with the training action name of the target action determined by the reference action information corresponding to the training course to obtain a first reference accuracy according to the comparison result; the similarity between the action image and the training action pattern of the target action may be obtained to derive a second reference accuracy; and a third reference accuracy may be obtained according to the signal comparison result. The weight values respectively corresponding to the first, second, and third reference accuracies are then obtained, and the accuracy of the user completing the target action is determined from the three reference accuracies and their respective weight values.
As an alternative embodiment, step S31 includes:
acquiring the sub-neural and sub-muscle electrical signals of each channel corresponding to the actual neural and muscle electrical signals;
acquiring the sub-reference action signals of each channel corresponding to the reference action signals of the current training course;
comparing, for each identical channel, the sub-neural and sub-muscle electrical signals of the channel with the sub-reference action signals of the channel to determine a signal comparison result for each channel, and/or determining a correlation coefficient for each channel from the sub-neural and sub-muscle electrical signals and the sub-reference action signals of the channel;
and obtaining the signal comparison result according to the signal comparison result of each channel and/or the correlation coefficient of each channel.
In the practical application process, the sub-neural and sub-muscle electrical signals of each channel corresponding to the actual neural and muscle electrical signals are acquired; when the user acts according to the action instruction of the training course, the sub-neural and sub-muscle electrical signals of each channel can be acquired through the multiple channels of the arm ring (or the receiving cavity).
The sub-reference action signals of each channel corresponding to the reference action signal of the current training course are acquired based on the preset correspondence between the reference action signal of the training course and the sub-reference action signals of each channel.
The sub-neural and sub-muscle electrical signals of each channel are compared with the sub-reference action signals to determine a signal comparison result for each channel:
Specifically, the neural and muscle electrical signal of a channel can be regarded as an array of n discrete elements, where the sampling frequency f_s employed by the system is between 500 Hz and 1 kHz, n = f_s * t_1, and t_1 is the recording time of the actual neural and muscle electrical signals:

EMG = {x_1, ..., x_n}  (1)

Each channel signal is rectified by taking the absolute value of each element, as shown in equation (2), or equivalently by squaring first and taking the square root second, as in equations (3) and (4):

EMG_abs = {|x_1|, ..., |x_n|}  (2)

or

EMG_sq = {x_1^2, ..., x_n^2}  (3)

or

EMG_rect = {sqrt(x_1^2), ..., sqrt(x_n^2)}  (4)

The rectified elements are then sorted from large to small and the mean of the first ceil(20% * n) (rounded) elements is calculated; this mean is used as the channel's neural and muscle electrical signal value EMG_T,j, where j denotes the j-th channel, j = 1, 2, ..., z (z is a positive integer).

Correspondingly, let EMG_B,j denote the neural and muscle electrical signal value of the sub-reference action signal of the channel corresponding to the reference action signal of the training course. The signal comparison result of the channel is then determined by the ratio of the channel's training value EMG_T,j to its reference value EMG_B,j:

ratio_j = EMG_T,j / EMG_B,j
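A minimal sketch of this per-channel computation follows; the function names and the fixed top-20% fraction mirror the description above but are otherwise illustrative:

```python
import math


def channel_emg_value(samples, top_fraction=0.2):
    """EMG_T,j for one channel: rectify the signal with the absolute
    value (equation (2)), sort from large to small, and average the
    first ceil(top_fraction * n) elements."""
    rectified = sorted((abs(x) for x in samples), reverse=True)
    k = max(1, math.ceil(top_fraction * len(rectified)))
    return sum(rectified[:k]) / k


def channel_ratio(training_samples, reference_value):
    """Signal comparison result for one channel: the ratio
    EMG_T,j / EMG_B,j; a value near 1 indicates the presented action
    matches the target action on this channel."""
    return channel_emg_value(training_samples) / reference_value
```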
If the signal comparison result (ratio) of each channel is 1, the action actually presented by the artificial limb model during the user's training based on the training course is the same as the target action; if the signal comparison result (ratio) of a channel is not 1, the action actually presented by the artificial limb model differs from the target action.
Taking the sub-neural and muscle electrical signals corresponding to 8 channels as an example, fig. 5 is an 8-channel neural and muscle electrical signal diagram and fig. 6 is an 8-channel radar diagram. By outputting the action information of the artificial limb model according to the action control parameters together with the radar diagram, the user can conveniently and accurately judge each training effect.
The radar chart presents the result intuitively, but to let the user know the differences more accurately, the correlation coefficient of each channel is determined, for each identical channel, from the sub-neural and sub-muscle electrical signals of the channel and its sub-reference action signals. The correlation coefficient measures the degree of linear correlation between two signal curves. Taking 8 channels as an example, for each of the 8 channels the correlation between the sub-reference action signal values EMG_X and the sub-neural and muscle electrical signal values EMG_Y recorded during training is calculated as shown in equation (5); the closer r is to 1, the more correlated the two signal curves are:

r = Σ_i (X_i − X̄)(Y_i − Ȳ) / sqrt( Σ_i (X_i − X̄)^2 · Σ_i (Y_i − Ȳ)^2 )  (5)

The results of each training session are presented on the screen as in table 1.
TABLE 1 Correlation coefficients (data for reference only)

r_1    r_2    r_3    r_4    r_5    r_6    r_7    r_8
0.55   0.45   0.78   0.66   0.32   1      1      1
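The per-channel Pearson correlation of equation (5) can be sketched as follows; the function name is illustrative:

```python
import math


def correlation_coefficient(xs, ys):
    """Pearson correlation coefficient r (equation (5)) between a
    channel's reference curve EMG_X and its training curve EMG_Y;
    the closer r is to 1, the more correlated the two curves."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = math.sqrt(sum((x - mean_x) ** 2 for x in xs)
                    * sum((y - mean_y) ** 2 for y in ys))
    return num / den
```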
It should be noted that after each training repetition, the screen displays the radar chart and the correlation coefficient table for that repetition. After every 5 repetitions, the system reminds the user to rest once and presents the 5 training results on the computer screen. When all 5 training results are qualified (for example, when the 8-channel ratios in the 5 radar charts are close to 1 and the values in the 5 correlation coefficient tables are close to 1), the action training of the training course is completed; the user can change the action and continue with the next course to be trained.
As an alternative embodiment, after step S32, the method further includes:
and outputting the actual action information, the reference action information, and the signal comparison result, so that the user can conveniently compare them and understand the training effect, allowing the user to adjust and thereby improve the quality and effect of training.
In the technical scheme disclosed in this embodiment, the actual amplitude of the action during user training is determined by comparing the actual neural and muscle electrical signals with the reference action signals corresponding to the training course, so as to determine the strength the user's training has reached. The course to be trained is then determined by integrating the actual action information, the reference action information corresponding to the current training course, and the comparison result between the actual neural and muscle electrical signals and the reference action signals corresponding to the training course.
A third embodiment of the training method for prosthetic control according to the present invention is proposed; please refer to fig. 7, which is a schematic flow chart of the third embodiment of the training method for prosthetic control according to the present invention. In this embodiment, step S20 includes:
step S21, determining the actual action information generated by the artificial limb model while the user acts within a preset duration according to the actual neural and muscle electrical signals;
step S30 includes:
step S33, determining, according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course, the number of times the target action is completed and the total number of actions performed by the artificial limb model within the preset duration, wherein the target action is the training action determined by the reference action information corresponding to the current training course;
And step S34, determining the accuracy of the artificial limb model's actions within the preset duration according to the number of completions and the total number, so as to obtain the accuracy of the user completing the target action.
The action information includes at least one of an action name and an action image. Optionally, the action information further includes the number of actions. The target action is the training action determined by the reference action information corresponding to the current training course.
In the practical application process, for example, a fist-making instruction is issued at random, the user generates a 5 s fist-making movement intention, and a fist-making response occurs on the artificial limb model. However, the user may not yet control the model skilfully, so that during the 5 s the artificial limb model may produce other action responses, such as opening, or closing only the thumb and index finger. In this embodiment, to determine the accuracy of training according to the training course, the actual action information generated by the artificial limb model while the user acts within the preset duration is determined according to the actual neural and muscle electrical signals, and the number of times the target action is completed and the total number of actions within the preset duration are then determined from the action information. Specifically, the user's control of the artificial limb model's actions during the 5 s can be counted in a statistics table, and the accuracy of the user completing the target action calculated.
Assuming that the training action of the training course is making a fist, the sampling frequency f_s is 1000 Hz, the sampling time t_2 of the training signal is 5 s, and the system gives a response result every 100 ms, then 50 response results are taken as a whole and the final result is presented on the screen in the form of table 2.
Table 2 Accuracy statistics (data for reference only)

Fist      Open     Thumb bend  Index bend  Middle bend  ...  Unrecognizable
10 times  2 times  1 time      3 times     1 time       ...  2 times
The accuracy of this fist-making session was 20% (i.e., (10/50) × 100%).
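Counting the 50 classifier responses and taking the target's share, as in table 2, could be sketched as follows; the function name and label strings are illustrative:

```python
from collections import Counter


def target_accuracy(responses, target="fist"):
    """Accuracy over one training window: with one classifier response
    every 100 ms over 5 s there are 50 responses, and the accuracy is
    the number of target-action responses divided by the total."""
    counts = Counter(responses)
    return counts[target] / len(responses)
```

With 10 "fist" responses out of 50, this reproduces the 20% figure above.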
It should be noted that, in this embodiment, a Support Vector Data Description (SVDD) algorithm is introduced to judge whether the training signal within the 5 s is a fist-making signal. SVDD is a one-class classification algorithm that can distinguish target samples from non-target samples. Taking fist-making as an example, when the system randomly issues the action instruction, the fist-making signal data are regarded as target samples x and the other action signal data as non-target samples. In SVDD, the centre a and radius R of the sphere enclosing the target samples are respectively:

a = Σ_i α_i φ(x_i)

R = sqrt( K(x_v, x_v) − 2 Σ_i α_i K(x_i, x_v) + Σ_i Σ_j α_i α_j K(x_i, x_j) )

wherein x_v ∈ SV (the support vectors), K(x_i, x_j) is a kernel function equivalent to the inner product of samples in the feature space, i.e., K(x_i, x_j) = <φ(x_i), φ(x_j)>, R is the radius of the sphere, a is the centre of the sphere, and x_i is a target sample point.

For a training sample x_test, the distance to the centre of the sphere is:

d = sqrt( K(x_test, x_test) − 2 Σ_i α_i K(x_i, x_test) + Σ_i Σ_j α_i α_j K(x_i, x_j) )

If d ≤ R, the training sample lies on or inside the sphere and belongs to the fist-making action; otherwise it does not. If the training sample does not belong to fist-making, the sample set to which it belongs (opening, thumb bending, and the like) is judged by the same method; if none applies, it is judged unrecognizable.
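A minimal sketch of the SVDD decision rule, assuming the support vectors and coefficients α_i have already been obtained from training; the RBF kernel and its gamma value are hypothetical choices:

```python
import math


def rbf_kernel(x, y, gamma=0.5):
    """Gaussian (RBF) kernel K(x, y); gamma is a hypothetical value."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))


def svdd_distance(x_test, support_vectors, alphas, kernel=rbf_kernel):
    """Distance d from x_test to the SVDD sphere centre a, expanded
    entirely in kernel evaluations (no explicit feature map phi)."""
    cross = sum(a_i * kernel(sv, x_test)
                for sv, a_i in zip(support_vectors, alphas))
    const = sum(a_i * a_j * kernel(si, sj)
                for si, a_i in zip(support_vectors, alphas)
                for sj, a_j in zip(support_vectors, alphas))
    return math.sqrt(max(0.0, kernel(x_test, x_test) - 2.0 * cross + const))


def is_target_action(x_test, support_vectors, alphas, radius):
    """d <= R: the sample lies on or inside the sphere (fist-making);
    otherwise it is not the target action."""
    return svdd_distance(x_test, support_vectors, alphas) <= radius
```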
In the technical scheme disclosed in this embodiment, the actual action information generated by the artificial limb model while the user acts within the preset duration is determined according to the actual neural and muscle electrical signals; the number of times the user completes the target action and the total number of actions are then determined from the action information, and the accuracy of the user completing the target action within the preset duration is determined. The user's accuracy on the training action of the training course is thereby evaluated, quantifying the training effect while further improving the quality and effect of user training.
A fourth embodiment of the training method for prosthetic control according to the present invention is proposed; please refer to fig. 8, which is a flowchart of the fourth embodiment of the training method for prosthetic control according to the present invention. In this embodiment, step S20 includes:
step S22, determining the actual action information generated by the artificial limb model while the user acts within a preset duration according to the actual neural and muscle electrical signals;
step S30 includes:
step S35, determining, according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course, the duration of the training action determined by the reference action information corresponding to the current training course, and updating the number of times that training action has been completed;
And step S36, determining the accuracy of the user completing the target action according to the duration and the number of times.
The action information includes at least one of an action name and an action image. Optionally, the action information further includes the number of actions and the duration of the actions.
In this embodiment, in order to evaluate the smoothness with which the user controls the artificial limb, the training action determined by the reference information corresponding to the training course is a grasping process; controlling the artificial limb model through the grasping process includes five actions: reaching, pre-grasping, grasping, releasing, and recovering. Specifically, the duration of the training action determined by the reference action information corresponding to the current training course is determined according to the actual action information, and the number of times that training action has been completed is updated. Taking a spherical object as an example, the training process is explained in detail below.
Before training, a spherical virtual object is established in the virtual system using a three-dimensional graphics development kit. Only one virtual object is displayed each time the prosthesis executes an action instruction. The user starts training only when the system pops up a window on the interface prompting the grasp.
The user controls the artificial limb model to extend stably toward the spherical object, keeping the prosthetic hand open during the reach; when the prosthesis approaches the object, it stops extending and accurately performs the pre-grasping action. The pre-grasping posture is then held while extending to the object, the object is touched lightly and grasped, and the grasping posture is held for 3 s. The user is then required to lift the object to a preset height in this posture to ensure that the posture can grip the object stably; if the object cannot be lifted stably, the grasping posture is adjusted and tried again. After the object has been grasped, the user puts it back in its original position in the placement area, the prosthesis opens smoothly to release the object, and finally recovers. The system records the time of each successful grasping process and presents it on the screen as in table 3. The user can judge the training effect from the time of each successful grasp; the whole grasping process is shown in fig. 9, which is a schematic diagram of the grasping process.
TABLE 3 Grasp training times for this session

Reach  Pre-grasp  Grasp  Lift  Put back  Release  Recover  Total
1s     1s         4s     3s    3s        1s       1s       14s
When the user completes 3 consecutive successful grasps each within 12 s, the grasping task is complete. The next grasping task is then carried out, and when the user finishes all grasping tasks, the prosthesis training course ends.
The accuracy of the user completing the target action is determined according to the duration and the number of times. Specifically, it can be determined as the ratio of the number of grasps completed successfully within the preset duration to the total number of attempts, where the total number is obtained by updating the number of times the training action determined by the reference action information corresponding to the current training course has been performed, and the number of successful grasps is incremented whenever the duration is less than or equal to the reference duration set for one grasp, such as 12 s, without limitation.
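The success-ratio and completion criteria above can be sketched as follows; the function names are illustrative, and the 12 s reference duration and 3-grasp streak are the example values from the description:

```python
def grasp_accuracy(process_times, reference_duration=12.0):
    """Ratio of grasps completed within the reference duration
    (e.g. 12 s for the whole grasping process) to all attempts."""
    successes = sum(1 for t in process_times if t <= reference_duration)
    return successes / len(process_times)


def task_complete(process_times, reference_duration=12.0, streak=3):
    """The grasping task is complete after `streak` consecutive
    successful grasps, each within the reference duration."""
    run = 0
    for t in process_times:
        run = run + 1 if t <= reference_duration else 0
        if run >= streak:
            return True
    return False
```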
In the technical scheme disclosed in this embodiment, the duration of the training action determined by the reference action information corresponding to the target action instruction given by the training course is obtained from the actual action information and the reference action information corresponding to the current training course, so that the accuracy of the user completing the target action within the preset duration is determined according to the duration and the number of times of the training action. The user's accuracy on the training action of the training course is thereby evaluated, quantifying the training effect while further improving the quality and effect of user training.
The invention also proposes a terminal device, comprising a memory, a processor, and a training program for prosthetic control stored in the memory and executable on the processor, which, when executed by the processor, performs the steps of the training method for prosthetic control of any of the embodiments described above.
The invention also proposes a computer readable storage medium having stored thereon a training program for a prosthetic control, which when executed by a processor implements the steps of the training method for a prosthetic control according to any of the embodiments above.
The embodiments of the terminal device and the computer-readable storage medium provided by the invention include all the technical features of the embodiments of the training method for prosthetic control; their expanded description is substantially the same as that of those method embodiments and is not repeated herein.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present invention are merely for the purpose of description, and do not represent the advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as above, comprising instructions for causing a mobile terminal (which may be a handset, a computer, a server, a controlled terminal, or a network device, etc.) to perform the method of each embodiment of the present invention.
The foregoing description is only of the preferred embodiments of the present invention, and is not intended to limit the scope of the invention, but rather is intended to cover any equivalents of the structures or equivalent processes disclosed herein or in the alternative, which may be employed directly or indirectly in other related arts.

Claims (8)

1. A training method for prosthetic control, the method comprising:
acquiring the actual neural and muscle electrical signals generated when a user performs an action based on a target action instruction given by a current training course;
determining the actual action information generated by the artificial limb model during the user's action according to the actual neural and muscle electrical signals;
determining the accuracy of the user completing the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course;
the determining the accuracy of the user completing the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course comprises the following steps:
acquiring sub-nerve electric signals and sub-muscle electric signals of all channels corresponding to the actual nerve electric signals and the muscle electric signals;
obtaining sub-standard action signals of all channels corresponding to the standard action signals of the current training course;
comparing the sub-nerve electrical and sub-muscle electrical signals and the sub-reference motion signals of the channels based on each identical channel to determine a signal comparison result of each channel, and/or determining a correlation coefficient of each channel based on the sub-nerve electrical and sub-muscle electrical signals and the sub-reference motion signals of the channels based on each identical channel;
Obtaining a signal comparison result according to the signal comparison result of each channel and/or the correlation coefficient of each channel;
and determining the accuracy of the user to finish the target action according to the actual action information, the reference action information corresponding to the current training course and the signal comparison result.
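Claim 1 leaves the per-channel comparison metric open; as one illustrative, non-authoritative sketch, the correlation coefficient of each channel could be computed as a Pearson correlation and averaged across channels. The function names, channel layout, and mean aggregation below are assumptions for illustration, not part of the patent:

```python
from statistics import mean

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length signals."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = sum((x - ma) ** 2 for x in a) ** 0.5
    sb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (sa * sb) if sa and sb else 0.0

def channel_comparison(actual_channels, reference_channels):
    """Correlate each actual channel with the reference channel of the
    same index (the 'identical channel' of claim 1) and aggregate the
    per-channel coefficients into one overall comparison score."""
    coeffs = [pearson(a, r)
              for a, r in zip(actual_channels, reference_channels)]
    return mean(coeffs)
```

For identical actual and reference signals the score approaches 1.0, while anti-correlated channels pull it toward -1.0; how the score is folded into the final accuracy is left open by the claim.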
2. The training method for artificial limb control according to claim 1, wherein the step of determining the actual action information generated when the user acts according to the actual nerve electrical signals and muscle electrical signals comprises:
determining a corresponding action control instruction for the user's action according to the actual nerve electrical signals and muscle electrical signals;
acquiring motion control parameters corresponding to the action control instruction; and
controlling the motion of the artificial limb model according to the motion control parameters, so as to determine the actual action information generated by the artificial limb model when the user acts.
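The pipeline of claim 2 (signals → action control instruction → motion control parameters → model motion) might be sketched as below; the toy amplitude decoder, instruction names, and parameter table are all invented for illustration and stand in for the real signal classifier and parameter store:

```python
# Hypothetical instruction -> motion-parameter table; the names and
# values are examples, not values disclosed by the patent.
MOTION_PARAMS = {
    "grip":    {"joint": "fingers", "angle": 45.0, "speed": 0.5},
    "release": {"joint": "fingers", "angle": 0.0,  "speed": 0.5},
}

def decode_instruction(signal_amplitude, threshold=0.3):
    """Toy decoder standing in for the real signal classifier:
    strong activation maps to 'grip', weak activation to 'release'."""
    return "grip" if signal_amplitude >= threshold else "release"

def actual_action_info(signal_amplitude):
    """Decode an instruction, look up its motion control parameters,
    and return the action information the driven model would produce."""
    instruction = decode_instruction(signal_amplitude)
    params = MOTION_PARAMS[instruction]
    return {"instruction": instruction, **params}
```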
3. The training method for artificial limb control according to claim 1, wherein, after the step of determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course, the method comprises:
determining a course to be trained for the user according to the accuracy with which the user completes the target action.
4. The training method for artificial limb control according to claim 3, wherein the step of determining the course to be trained for the user according to the accuracy with which the user completes the target action comprises:
when the accuracy is greater than or equal to a preset accuracy, determining the next training course adjacent to the current training course as the course to be trained for the user; and
when the accuracy is smaller than the preset accuracy, determining the current training course as the course to be trained for the user.
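The course-advancement rule of claim 4 reduces to a threshold test. A minimal sketch, assuming a numeric course index and an example preset accuracy of 0.8 (the patent does not fix a value):

```python
def next_course(accuracy, current_course, preset_accuracy=0.8):
    """Advance to the next adjacent course when the accuracy reaches
    the preset accuracy; otherwise repeat the current course."""
    if accuracy >= preset_accuracy:
        return current_course + 1
    return current_course
```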
5. The training method for artificial limb control according to claim 1, wherein the step of determining the actual action information generated by the artificial limb model when the user acts according to the actual nerve electrical signals and muscle electrical signals comprises:
determining, according to the actual nerve electrical signals and muscle electrical signals, the actual action information generated by the artificial limb model when the user acts within a preset duration;
and the step of determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course comprises:
determining, according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course, the number of completions of the target action and the total number of actions performed by the artificial limb model within the preset duration, wherein the target action is a training action determined by the reference action information corresponding to the current training course; and
determining, from the number of completions and the total number, the accuracy of the actions of the artificial limb model within the preset duration, so as to obtain the accuracy with which the user completes the target action.
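Claim 5's accuracy can be read as completions of the target action divided by the total number of actions within the preset duration. A minimal sketch, assuming the actions have already been classified into labels (the labels themselves are invented):

```python
def action_accuracy(actions, target_action):
    """Accuracy over a preset duration: completions of the target
    action divided by the total number of actions performed."""
    total = len(actions)
    if total == 0:
        return 0.0  # no actions recorded in the window
    completed = sum(1 for a in actions if a == target_action)
    return completed / total
```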
6. The training method for artificial limb control according to claim 1, wherein the step of determining the actual action information generated by the artificial limb model when the user acts according to the actual nerve electrical signals and muscle electrical signals comprises:
determining, according to the actual nerve electrical signals and muscle electrical signals, the actual action information generated by the artificial limb model when the user acts within a preset duration;
and the step of determining the accuracy with which the user completes the target action according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course comprises:
determining, according to the actual action information and the reference action information corresponding to the target action instruction given by the current training course, a duration of the training action determined by the reference action information corresponding to the current training course, and updating the number of times the training action is performed; and
determining the accuracy with which the user completes the target action according to the duration and the number of times.
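Claim 6 tracks both how long the training action is held and how many separate times it is performed. The sketch below assumes a fixed-period stream of classified action samples; the sampling period and the run-counting rule are assumptions, and the claim leaves open how duration and count combine into the final accuracy:

```python
def duration_and_count(samples, target, sample_period=0.1):
    """Given one classified action label per fixed-period sample,
    return (total seconds the target action was held, number of
    separate runs of the target action)."""
    duration, runs, in_run = 0.0, 0, False
    for action in samples:
        if action == target:
            duration += sample_period      # accumulate hold time
            if not in_run:
                runs += 1                  # a new occurrence begins
                in_run = True
        else:
            in_run = False
    return duration, runs
```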
7. A terminal device, characterized in that the terminal device comprises: a memory, a processor, and a training program for artificial limb control that is stored in the memory and executable on the processor, wherein the training program, when executed by the processor, performs the steps of the training method for artificial limb control according to any one of claims 1 to 6.
8. A computer-readable storage medium, characterized in that a training program for artificial limb control is stored thereon, and the training program, when executed by a processor, implements the steps of the training method for artificial limb control according to any one of claims 1 to 6.
CN202111410424.1A 2021-11-19 2021-11-19 Training method for artificial limb control, terminal equipment and computer readable storage medium Active CN114224483B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111410424.1A CN114224483B (en) 2021-11-19 2021-11-19 Training method for artificial limb control, terminal equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN114224483A CN114224483A (en) 2022-03-25
CN114224483B true CN114224483B (en) 2024-04-09

Family

ID=80751098


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN204049844U (en) * 2014-04-10 2014-12-31 深圳桑菲消费通信有限公司 A kind of convalescence device
CN110720908A (en) * 2018-07-17 2020-01-24 广州科安康复专用设备有限公司 Muscle injury rehabilitation training system based on vision-myoelectricity biofeedback and rehabilitation training method applying same
CN111317600A (en) * 2018-12-13 2020-06-23 深圳先进技术研究院 Artificial limb control method, device, system, equipment and storage medium

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10796599B2 (en) * 2017-04-14 2020-10-06 Rehabilitation Institute Of Chicago Prosthetic virtual reality training interface and related methods




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant