CN111631923A - Neural network control system of exoskeleton robot based on intention recognition - Google Patents

Neural network control system of exoskeleton robot based on intention recognition

Info

Publication number
CN111631923A
Authority
CN
China
Prior art keywords
wearer
exoskeleton
neural network
intention
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010490739.0A
Other languages
Chinese (zh)
Inventor
李智军
高洪波
郝正源
阚震
皮明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Institute of Advanced Technology University of Science and Technology of China
Original Assignee
Institute of Advanced Technology University of Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Institute of Advanced Technology University of Science and Technology of China filed Critical Institute of Advanced Technology University of Science and Technology of China
Priority to CN202010490739.0A priority Critical patent/CN111631923A/en
Publication of CN111631923A publication Critical patent/CN111631923A/en
Pending legal-status Critical Current

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/103 Detecting, measuring or recording devices for testing the shape, pattern, colour, size or movement of the body or parts thereof, for diagnostic purposes
    • A61B 5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B 5/112 Gait analysis
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 3/00 Appliances for aiding patients or disabled persons to walk about
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 19/00 Accessories fitted to manipulators, e.g. for monitoring, for viewing; Safety devices combined with or specially adapted for use in connection with manipulators
    • B25J 19/02 Sensing devices
    • B25J 19/021 Optical sensing devices
    • B25J 19/023 Optical sensing devices including video camera means
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/0006 Exoskeletons, i.e. resembling a human figure
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/08 Programme-controlled manipulators characterised by modular constructions
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1679 Programme controls characterised by the tasks executed
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J 9/00 Programme-controlled manipulators
    • B25J 9/16 Programme controls
    • B25J 9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J 9/1697 Vision controlled systems
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H 2003/005 Appliances for aiding patients or disabled persons to walk about with knee, leg or stump rests
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H 3/00 Appliances for aiding patients or disabled persons to walk about
    • A61H 2003/007 Appliances for aiding patients or disabled persons to walk about secured to the patient, e.g. with belts

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Mechanical Engineering (AREA)
  • Robotics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Veterinary Medicine (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Dentistry (AREA)
  • Pathology (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Pain & Pain Management (AREA)
  • Physiology (AREA)
  • Epidemiology (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Biophysics (AREA)
  • Rehabilitation Therapy (AREA)
  • Biomedical Technology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Surgery (AREA)
  • Multimedia (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Rehabilitation Tools (AREA)

Abstract

The invention provides a neural network control system for an exoskeleton robot based on intention recognition, comprising: an intention classification module, which classifies the wearer's intention; an action and trajectory planning module, which sets the patient's rehabilitation training actions, collects the wearer's lower-limb electromyographic signals and gait data during walking, and plans a motion trajectory in the joint space of the lower-limb exoskeleton robot; and an exoskeleton state judgment module, which acquires, processes and analyzes the exoskeleton's state information to obtain its current state. The invention solves the problem of unknown system terms in computed-torque control based on a dynamic model, and improves the real-time performance of the control and its robustness during transitions between different states. It spares the wearer from learning a complex exoskeleton operation procedure, makes the rehabilitation process better suited to the patient, strengthens human-machine interaction, and can effectively improve the rehabilitation effect.

Description

Neural network control system of exoskeleton robot based on intention recognition
Technical Field
The invention relates to the field of intention recognition and control for rehabilitation lower-limb exoskeleton robots, and in particular to a neural network control system for an exoskeleton robot based on intention recognition. More specifically, it relates to a neural network control system for a rehabilitation lower-limb exoskeleton robot based on wearer intention recognition, and further to a wearer intention recognition method based on a hidden Markov model and a Bayesian network, and a lower-limb exoskeleton control method based on a radial basis function neural network.
Background
At present, China has more than 80 million people with various disabilities, and elderly people with reduced ability to live independently account for 14.3 percent of the population. Every year, more than 3 million patients suffer motor dysfunction caused by Parkinson's disease, stroke, craniocerebral trauma and the like, and people disabled by lower-limb injuries make up a major share of the disabled population. Pathological gait, joint stiffness, muscular atrophy and similar problems bring serious inconvenience to patients' lives, and falls easily cause secondary injury. Studies have demonstrated that motor disorders caused by central nervous system damage can recover through rehabilitation training. In traditional rehabilitation training, a therapist usually uses an electric wheelchair, an assistive prosthesis or similar aids to treat the patient; but because this relies mainly on physical therapists, the treatment cycle is long, therapists are expensive, and the rehabilitation effect depends largely on the trainer's skill. A rehabilitation lower-limb exoskeleton can improve the quality of life of patients who have lost the ability to walk, restore walking or other motions lost to disease or accidental injury with system assistance, and reduce the number of therapists required. Furthermore, exoskeletons make training more uniform and retrospective analysis easier, and can be tailored to each patient to safely restore motor function by providing active muscle activation in task-oriented, repetitive gait therapy. Methods that assist lower-limb rehabilitation with a lower-limb exoskeleton are therefore becoming increasingly widespread.
At present, rehabilitation lower-limb exoskeletons mainly follow pre-planned trajectories, and a simple, reliable method for identifying the wearer's movement intention is lacking. A rehabilitation lower-limb exoskeleton can help the wearer perform rehabilitation training along a trained trajectory only if it recognizes the wearer's intended action. Methods for detecting human intention fall into two categories. The first implants sensors in the human body; this is not only expensive, but its safety for the human body has not yet been established, so it cannot be popularized. The second measures signals outside the body by various indirect methods. The patent with application number CN201210355661 describes an electroencephalogram cap that determines the action intention of the exoskeleton wearer by examining brain excitation points through multiple dry electrodes attached to the scalp. However, it does not explain how brain excitation points specifically reflect movement intent, and acquiring and recognizing wearer intention from electroencephalogram signals is currently immature and complicated to operate. A simple and convenient method of identifying the wearer's motor intention is therefore needed.
Disclosure of Invention
In view of the shortcomings in the prior art, it is an object of the present invention to provide a neural network control system for exoskeleton robots based on intent recognition.
According to the invention, the neural network control system of the exoskeleton robot based on intention recognition comprises:
an intention classification module: classifying the wearer's intent;
an action and track planning module: setting rehabilitation training actions of a patient, collecting lower limb electromyographic signals of a human body, collecting gait data of the human body during walking, and planning a motion track in a joint space of the lower limb exoskeleton robot;
the exoskeleton state judgment module: acquiring state information of the exoskeleton, processing and analyzing the state information to obtain current state information of the exoskeleton;
intention information acquisition module: collecting facial features of an exoskeleton wearer to obtain movement intention information of the wearer;
an intent recognition module: analyzing the movement intention of the wearer according to the collected movement intention information of the wearer, and identifying the movement intention of the wearer;
an exoskeleton control module: performing adaptive control of the next trajectory through neural network control according to the identified movement intention of the wearer.
Preferably, the wearer's intention is classified into: normal walking, stop walking, normal left turn, normal right turn, and crossing obstacles.
Preferably, the action and trajectory planning module:
the rehabilitation training action of a patient is set, the electromyographic signals of the lower limbs of the human body are collected through the electromyograph, gait data of the human body during walking are collected through the gyroscope and the accelerometer in an auxiliary mode, and the movement track is planned in the joint space of the lower limb exoskeleton robot.
Preferably, the exoskeleton state determination module:
the exoskeleton robot control method comprises the steps of collecting torque values of a left leg hip joint, a left leg knee joint, a right leg hip joint and a right leg knee joint of an exoskeleton robot through a torque sensor, collecting angular velocity of each joint at present through an encoder, carrying out Kalman filtering on the collected torque values and angular velocity values, calculating current gait through positive kinematics, and identifying the current motion state of an exoskeleton, wherein the identification result comprises the following steps: current motion state, current walking speed.
Preferably, the intention information collection module:
the original head image of the head of the wearer, which is acquired by the camera, is transmitted to an image processor, the image processor preprocesses the original image, and the median filtering method is adopted to protect the edge of the image and remove noise;
the method comprises the steps of enhancing image details by a histogram equalization method, extracting features of a person by a Laplacian pyramid, identifying the position of the eye by the characteristics of the eye in the image, and identifying the center of the head by the semicircular features of the edge of the head.
Preferably, the intent recognition module:
after the positions of the eyes and the center of the head are identified, the head deflection angle is calculated to obtain the wearer's turning angle. Based on the trained relation between turning angle and steering action, when the wearer turns toward the same direction more than a preset number of times within a given period, or holds the turn longer than a preset duration, in each case exceeding a preset angle, the wearer is judged to intend to turn in that direction. The wearer's intention is then recognized through hidden Markov and Bayesian networks, combined with the identified current motion state of the exoskeleton, to obtain the movement intention, and a wearer behavior database is established so that the corresponding preset motion trajectory is adopted.
Preferably, the exoskeleton control module:
the joint angles in joint space are controlled with an adaptive radial basis function neural network control method;
according to the different recognized actions, the adaptive neural network tracks different desired angles, achieving smooth transitions between actions;
an RBFNN approximates the uncertain terms in the system model, solving the problem of unknown system terms in computed-torque control based on a dynamic model;
the adaptive neural network ensures the real-time performance of the control and its robustness during transitions between different states.
Compared with the prior art, the invention has the following beneficial effects:
1. according to the invention, the wearable camera is utilized to identify the movement intention of the lower limb exoskeleton wearer by acquiring the facial information of the lower limb exoskeleton wearer in the movement process. The inconvenience brought by other identification modes can be solved, and the adjustment can be carried out at any time.
2. According to the invention, the rotation angle of the head is obtained by performing geometric analysis on the image of the face of the wearer, and meanwhile, information such as joint moment, joint angle, joint angular acceleration and the like of the lower limb exoskeleton robot is collected, so that the motion state of the lower limb exoskeleton robot is identified as the basis of intention identification, the problem of error identification caused by abnormality of single information is solved, the identification accuracy is improved, and convenience is provided for exoskeleton control.
3. The invention identifies the collected head movement information of the wearer through the hidden Markov model and the two-layer structure of the naive Bayes, thereby effectively improving the identification accuracy and generalization performance.
4. The invention solves the problem of unknown system items in the calculation torque control based on the model control by using the radial basis function neural network control to the lower limb exoskeleton robot and adopting the RBFNN network to approach the uncertain items in the system model. The real-time performance of control and the robustness of control during different state transitions are solved through the adaptive neural network.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments with reference to the following drawings:
FIG. 1 is a block diagram of the present invention.
Fig. 2 is a schematic view of the wearing manner of the camera of the wearer in the present invention.
Fig. 3 is a schematic front view of the wearer in the present invention.
Fig. 4 is a schematic view of the angle of the head of a wearer in the present invention.
FIG. 5 is a flow chart of the intention recognition process of the present invention.
Fig. 6 is a schematic diagram of a lower extremity exoskeleton useful in the present invention.
Fig. 7 is a schematic diagram of a lower extremity exoskeleton modeling coordinate system in accordance with the present invention.
FIG. 8 is a block diagram of the hidden Markov model and Bayesian method for two-level recognition in the present invention.
FIG. 9 is a block diagram illustrating the structure of a hidden Markov model according to the present invention.
The figures show that:
1 is a cap worn by a lower limb exoskeleton wearer, 2 is a camera arranged on the cap, 3 is the head center of the lower limb exoskeleton wearer, and 4 is the binocular center of the lower limb exoskeleton wearer.
Detailed Description
The present invention will be described in detail with reference to specific examples. The following examples will assist those skilled in the art in further understanding the invention, but are not intended to limit the invention in any way. It should be noted that various changes and modifications, obvious to those skilled in the art, can be made without departing from the spirit of the invention, all of which fall within the scope of the present invention.
According to the invention, the neural network control system of the exoskeleton robot based on intention recognition comprises:
an intention classification module: classifying the wearer's intent;
an action and track planning module: setting rehabilitation training actions of a patient, collecting lower limb electromyographic signals of a human body, collecting gait data of the human body during walking, and planning a motion track in a joint space of the lower limb exoskeleton robot;
the exoskeleton state judgment module: acquiring state information of the exoskeleton, processing and analyzing the state information to obtain current state information of the exoskeleton;
intention information acquisition module: collecting facial features of an exoskeleton wearer to obtain movement intention information of the wearer;
an intent recognition module: analyzing the movement intention of the wearer according to the collected movement intention information of the wearer, and identifying the movement intention of the wearer;
an exoskeleton control module: performing adaptive control of the next trajectory through neural network control according to the identified movement intention of the wearer.
Specifically, the wearer's intention is classified into: normal walking, stop walking, normal left turn, normal right turn, and crossing obstacles.
Specifically, the action and trajectory planning module:
the rehabilitation training action of a patient is set, the electromyographic signals of the lower limbs of the human body are collected through the electromyograph, gait data of the human body during walking are collected through the gyroscope and the accelerometer in an auxiliary mode, and the movement track is planned in the joint space of the lower limb exoskeleton robot.
Specifically, the exoskeleton state determination module:
the exoskeleton robot control method comprises the steps of collecting torque values of a left leg hip joint, a left leg knee joint, a right leg hip joint and a right leg knee joint of an exoskeleton robot through a torque sensor, collecting angular velocity of each joint at present through an encoder, carrying out Kalman filtering on the collected torque values and angular velocity values, calculating current gait through positive kinematics, and identifying the current motion state of an exoskeleton, wherein the identification result comprises the following steps: current motion state, current walking speed.
Specifically, the intention information collection module:
the original head image of the wearer captured by the camera is transmitted to an image processor, which preprocesses the image; median filtering is used to remove noise while preserving image edges;
histogram equalization is used to enhance image detail, a Laplacian pyramid is used to extract facial features, the positions of the eyes are identified from their characteristics in the image, and the center of the head is identified from the semicircular feature of the head's edge.
Specifically, the intent recognition module:
after the positions of the eyes and the center of the head are identified, the head deflection angle is calculated to obtain the wearer's turning angle. Based on the trained relation between turning angle and steering action, when the wearer turns toward the same direction more than a preset number of times within a given period, or holds the turn longer than a preset duration, in each case exceeding a preset angle, the wearer is judged to intend to turn in that direction. The wearer's intention is then recognized through hidden Markov and Bayesian networks, combined with the identified current motion state of the exoskeleton, to obtain the movement intention, and a wearer behavior database is established so that the corresponding preset motion trajectory is adopted.
Specifically, the exoskeleton control module:
the joint angles in joint space are controlled with an adaptive radial basis function neural network control method;
according to the different recognized actions, the adaptive neural network tracks different desired angles, achieving smooth transitions between actions;
an RBFNN approximates the uncertain terms in the system model, solving the problem of unknown system terms in computed-torque control based on a dynamic model;
the adaptive neural network ensures the real-time performance of the control and its robustness during transitions between different states.
The present invention will be described more specifically below with reference to preferred examples.
Preferred example 1:
the invention relates to the field of lower limb exoskeleton robots, in particular to a neural network control system of a rehabilitation lower limb exoskeleton robot based on wearer intention identification. The method is designed to be convenient to wear and adjust by using the wearable camera; an image algorithm is designed instead of the current popular deep learning method to extract and identify the facial information of the wearer, so that the calculation amount is reduced; the design identifies the head rotation angle of the wearer and the motion state of the lower limb exoskeleton robot through the extracted features, so that the quantity of the data input by an intention identification network is increased, and the identification accuracy is improved; designing an identification method based on a hidden Markov model and a Bayesian method to identify the movement intention of the wearer, and judging whether the wearer needs to make left-turn, right-turn, straight-going, obstacle-crossing or stopping movement in the next step; the wearable lower limb exoskeleton robot is controlled by a self-adaptive radial basis function neural network control method, the problem of system unknowns in calculation moment control based on model control is solved, and the real-time performance of control and the robustness of control during different state conversion are solved. The method avoids the wearer from learning a complex exoskeleton operation method, enables the rehabilitation process to be more suitable for the patient, enhances the man-machine interaction, and can effectively improve the rehabilitation effect.
In order to achieve the above object, the method of the present invention comprises the steps of:
step 1: the wearer's intent is classified into: normal walking, stop walking, normal left turn, normal right turn, crossing obstacles.
Step 2: set the patient's rehabilitation training actions, collect the lower-limb electromyographic signals with an electromyograph, collect gait data during walking with the aid of a gyroscope and an accelerometer, and plan the motion trajectory in the joint space of the lower-limb exoskeleton robot.
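As a non-limiting illustration of joint-space trajectory planning, the following Python sketch fits periodic cubic splines through hypothetical gait key frames distilled from the recorded gait data; all key-frame values, joint names and time points are assumptions for illustration only and are not data of the invention.

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Hypothetical gait key frames: time instants within one gait cycle [s]
# and the desired hip and knee angles [rad].  All numbers are illustrative.
t_key = np.array([0.0, 0.3, 0.6, 0.9, 1.2])
hip_key = np.radians([20.0, 30.0, 5.0, -10.0, 20.0])
knee_key = np.radians([5.0, 15.0, 60.0, 30.0, 5.0])

# Periodic cubic splines give a smooth joint-space trajectory q_d(t).
hip_spline = CubicSpline(t_key, hip_key, bc_type="periodic")
knee_spline = CubicSpline(t_key, knee_key, bc_type="periodic")

t = np.linspace(0.0, 1.2, 121)
q_d = np.vstack([hip_spline(t), knee_spline(t)])          # desired joint angles
dq_d = np.vstack([hip_spline(t, 1), knee_spline(t, 1)])   # desired joint velocities
```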
Step 3: judge the current motion state of the exoskeleton. Torque sensors acquire the torque values of the hip and knee joints of the exoskeleton robot's left and right legs, and encoders acquire the angular velocity of each joint. Kalman filtering is applied to the acquired torque and angular velocity values, the current gait is computed through forward kinematics, and the current exoskeleton state, including the current action state and walking speed, is identified.
Step 4: acquire the wearer's movement intention information. The wearer's facial behavior is captured by the camera, and the original head image is transmitted to the image processor, which preprocesses it. Since the wearable robot lacks powerful GPU resources and cannot easily run deep-learning methods, the image is processed directly: median filtering removes noise while preserving image edges, and histogram equalization enhances image detail. A Laplacian pyramid extracts the facial features; the eye positions are identified from their characteristics in the image, namely the regions of minimum gray value that are circular and appear symmetrically. The head center is then identified from the semicircular feature of the head's edge, after which the head deflection angle is calculated from the line connecting the eye centers and the head center.
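A minimal sketch of this preprocessing chain, assuming the OpenCV library, a grayscale input image and an assumed kernel size, is given below as a non-limiting illustration.

```python
import cv2

def preprocess_head_image(path):
    """Median filtering, histogram equalization and one Laplacian-pyramid level,
    mirroring the preprocessing chain described above (kernel size assumed)."""
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    denoised = cv2.medianBlur(gray, 5)        # remove noise while preserving edges
    enhanced = cv2.equalizeHist(denoised)     # enhance image detail

    # One Laplacian-pyramid level: the difference between the image and the
    # up-sampled version of its down-sampled (Gaussian-blurred) copy.
    down = cv2.pyrDown(enhanced)
    up = cv2.pyrUp(down, dstsize=(enhanced.shape[1], enhanced.shape[0]))
    laplacian_level = cv2.subtract(enhanced, up)
    return enhanced, laplacian_level
```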
Step 5: according to the head deflection angle calculated in step 4 and the trained relation between turning angle and steering action, when the wearer turns toward the same direction 5 times within a given period, or holds the turn for more than 2 s, in each case exceeding a preset angle, the wearer is judged to intend to turn in that direction. Combined with the identified motion state of the exoskeleton, the wearer's intention is then recognized through hidden Markov and Bayesian networks, and a wearer behavior database is established so that the corresponding preset motion trajectory is adopted.
Step 6: control the joint angles in joint space with an adaptive radial basis function neural network control method, according to the recognized action. First, the control quantity τ required for control is analyzed with the dynamic model of the lower-limb exoskeleton, exposing a quantity F that cannot be obtained accurately during control; an RBFNN then approximates this uncertain term in the system model, solving the problem of unknown system terms in computed-torque control based on a dynamic model. By using the adaptive neural network to track the different desired angles, smooth transitions between motions are achieved, and the real-time performance of the control and its robustness during transitions between different states are ensured.
Preferred example 2:
the technical solutions in the embodiments of the present invention will be described in detail below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all embodiments. It should be noted that, for those skilled in the art, various changes and modifications can be made in the above-described embodiments without departing from the principles of the present invention, or equivalent substitutions can be made in some of them. Such modifications, adaptations, and alternatives are not to be construed as limiting the scope of the various embodiments of the invention, but are to be construed within the scope of the appended claims.
In a specific embodiment, the bottom layer of the control computer system of the lower-limb exoskeleton robot has an automatic balancing function: according to the sensor states, it commands the power mechanism to take corresponding actions so that the wearer remains standing in balance on various terrains without the wearer's lower limbs taking any action. The movement intentions of the lower-limb exoskeleton wearer are divided into: normal walking, stop walking, normal left turn, normal right turn, and crossing obstacles. Fig. 1 is a block diagram of the overall intention-recognition control system for the lower-limb exoskeleton. The main idea of the invention is as follows: first, data acquisition and preprocessing produce the required wearer behavior data set and lower-limb exoskeleton state data set; then the first-layer hidden Markov model and the second-layer Bayesian model are trained simultaneously. Data corresponding to intentions that the first layer finds confusable are passed to the second-layer Bayesian model, and the wearer's final intention is obtained from the maximum of the resulting probabilities.
Referring to fig. 2, the camera is connected to the image processor through a serial port; the camera is fixed on a cap worn on the wearer's head, with the camera's main optical axis at an angle of 30-60 degrees to the horizontal plane and the lens facing the wearer. The camera captures an original image of the wearer's face and transmits it to the image processor over a video line; the camera generally requires a dynamic range above 110 dB and a resolution above 3 million pixels. The image processor is connected to the intention recognition processor; it processes the wearer's head image and calculates the head deflection angles. As shown in fig. 4, Pitch is the pitch angle, Roll the rotation angle and Yaw the yaw angle; only the yaw and pitch angles need to be obtained. The calculation results are passed to the intention recognition processor, which is connected to the lower-limb exoskeleton robot through an Ethernet bus. Through this bus the intention recognition processor acquires the exoskeleton's current, torque, rotation angle, angular velocity, angular acceleration and other information, processes them, and obtains the current motion state of the lower-limb exoskeleton robot, including the current motion form and speed.
Referring to fig. 5, the wearer's head information is sent to the intention recognition processor through the serial port. The intention recognition processor acquires the exoskeleton's current speed, motion state and other signals through the Ethernet bus, calculates and analyzes in real time whether the lower-limb exoskeleton wearer intends to walk normally, stop, turn left, turn right or cross an obstacle, and sends the recognized intention to the exoskeleton Ethernet bus. The sensing devices collect both vision-based wearer intention information and exoskeleton motion information, which are analyzed jointly to identify the wearer's movement intention; fusing multiple sources of information improves the accuracy of intention recognition. At the same time, the wearer's intention information can be sent over the exoskeleton bus to the radial basis function neural network controller, ensuring consistency of the information.
Referring to fig. 2 to 9, a neural network control method for a rehabilitation lower limb exoskeleton robot based on wearer intention recognition specifically includes the following steps:
step 1, setting rehabilitation training actions of a patient, collecting lower limb electromyographic signals of a human body through an electromyograph, collecting gait data of the human body during walking in an auxiliary mode through a gyroscope and an accelerometer, and planning a motion track in a joint space of a lower limb exoskeleton robot.
Step 2: judge the current motion state of the exoskeleton. Torque sensors acquire the torque values of the hip and knee joints of the exoskeleton robot's left and right legs, and encoders acquire the angular velocity of each joint. Kalman filtering is applied to the acquired torque and angular velocity values, the current gait is computed through forward kinematics, and the current exoskeleton state, including the current action state and walking speed, is identified. Referring to fig. 7, the transformation matrices between coordinate systems are obtained by forward kinematic modeling of the lower-limb exoskeleton.
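The following non-limiting sketch illustrates, under simplifying assumptions (a scalar constant-state Kalman filter per sensor channel and a planar two-link leg with assumed link lengths), how the measured joint signals may be smoothed and a forward-kinematics quantity evaluated; it is not the full filter or the kinematic model of fig. 7.

```python
import numpy as np

def kalman_smooth(z, q=1e-3, r=1e-2):
    """Scalar constant-state Kalman filter applied to one sensor channel
    (torque or angular velocity); q and r are assumed noise variances."""
    x, p = float(z[0]), 1.0
    out = np.empty(len(z))
    for k, zk in enumerate(z):
        p = p + q                  # predict
        K = p / (p + r)            # Kalman gain
        x = x + K * (zk - x)       # correct with the measurement
        p = (1.0 - K) * p
        out[k] = x
    return out

def ankle_position(q_hip, q_knee, l_thigh=0.40, l_shank=0.42):
    """Forward kinematics of a planar two-link leg (assumed link lengths, metres):
    ankle position expressed in the hip frame."""
    x = l_thigh * np.sin(q_hip) + l_shank * np.sin(q_hip + q_knee)
    y = -l_thigh * np.cos(q_hip) - l_shank * np.cos(q_hip + q_knee)
    return x, y
```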
Step 3: acquire the wearer's movement intention information. The wearer's facial behavior is captured by the camera, and the original head image is transmitted to the image processor, which preprocesses it. Since the wearable robot lacks powerful computing resources and cannot easily run deep-learning methods, the image is processed directly: median filtering removes noise while preserving image edges, and histogram equalization enhances image detail. A Laplacian pyramid extracts the facial features, and the eye positions are identified from their characteristics in the image, namely the regions of minimum gray value that are circular and appear symmetrically. The specific detection steps are as follows:
a) Divide the image into two parts, denoted g(x1, y1) and p(x2, y2), representing the left and right halves of the facial symmetry, respectively.
b) Judge the eye position from the circular regions; for the k-th candidate circle, compute the following expression:
[Formula (1), reproduced only as an image in the original publication, defines the score Fk of the k-th candidate circle in terms of ω, Nk, Sk, Mk and Hk.]
where ω is a weight with 0 < ω < 1, Nk is the total number of pixels on the k-th circle, Sk is the sum of the gradient values of the pixels on the k-th circle, Mk is the total number of pixels inside the circle under test, and Hk is the sum of the gray values of the pixels inside the k-th circle.
c) Substitute the two images g(x1, y1) and p(x2, y2) into formula (1); the circle with the largest Fk value is taken as the pupil circle, and the remaining candidate circles are deleted.
The center of the head is then identified from the semicircular feature of the head's edge, and the head deflection angle can then be calculated from the line connecting the eye centers and the head center, as shown in fig. 3.
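As an illustrative stand-in for the circle-scoring formula (1), whose exact expression appears only as an image in the original publication, the sketch below uses standard Hough circle detection and selects the darkest candidate as the pupil; all thresholds and radii are assumed values, not parameters of the invention.

```python
import cv2
import numpy as np

def detect_eye_and_head(enhanced):
    """Locate a pupil candidate and the head centre in the preprocessed image.
    Hough circle detection replaces the scoring formula (1); thresholds and
    radii are assumed values."""
    h, w = enhanced.shape
    eye_center = None
    circles = cv2.HoughCircles(enhanced, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=w // 4, param1=100, param2=20,
                               minRadius=3, maxRadius=25)
    if circles is not None:
        def mean_gray(c):
            # Pupils have minimal gray value, so prefer the darkest interior.
            x, y, r = (int(v) for v in np.round(c))
            mask = np.zeros_like(enhanced)
            cv2.circle(mask, (x, y), r, 255, -1)
            return cv2.mean(enhanced, mask=mask)[0]
        best = min(circles[0], key=mean_gray)
        eye_center = (float(best[0]), float(best[1]))

    # Head centre from the largest external contour (semicircular head edge).
    _, binary = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    head_center = None
    if contours:
        (cx, cy), _ = cv2.minEnclosingCircle(max(contours, key=cv2.contourArea))
        head_center = (float(cx), float(cy))
    return eye_center, head_center
```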
Step 4, identifying the movement intention of the wearer: according to the method described in step 3, after the eye center and the head center are identified, the angle of head deflection is calculated to obtain the angle of turn of the wearer. The calculation formula of the turning angle theta is as follows:
θ=arctan[(Y1-Y2)/(X1-X2)]*β (2)
where (X1, Y1) is the pixel position of the head center and (X2, Y2) is the pixel position of the binocular center. According to the trained relation between turning angle and steering action, when the wearer turns toward the same direction 5 times within a given period, or holds the turn for more than 2 s, in each case exceeding a preset angle, the wearer is judged to intend to turn in that direction. Combined with the identified motion state of the exoskeleton, the wearer's intention is then recognized through hidden Markov and Bayesian networks (see fig. 8 and 9), and a wearer behavior database is established so that the corresponding preset motion trajectory is adopted.
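A minimal sketch of the turning-angle computation of formula (2) and of the turn-intention decision rule described above follows; the coefficient β, the 20-degree angle threshold, the sign convention and the event-counting details are assumptions for illustration only.

```python
import math
import time

BETA = 1.0            # scaling coefficient beta of formula (2); value assumed
ANGLE_THRESH = 20.0   # "certain angle" threshold in degrees; value assumed
TURN_COUNT = 5        # repeated-turn threshold from the description above
TURN_HOLD_S = 2.0     # sustained-turn duration threshold from the description

def turn_angle(head_center, eye_center):
    """Formula (2): theta = arctan[(Y1 - Y2)/(X1 - X2)] * beta.
    atan2 is used to avoid division by zero when X1 == X2."""
    x1, y1 = head_center
    x2, y2 = eye_center
    return math.degrees(math.atan2(y1 - y2, x1 - x2)) * BETA

class TurnIntentDetector:
    """Flags a turning intention when the turn exceeds ANGLE_THRESH either
    TURN_COUNT separate times or continuously for TURN_HOLD_S seconds.
    (The sliding time window of the description is omitted for brevity.)"""
    def __init__(self):
        self.count = {"left": 0, "right": 0}
        self.hold_start = {"left": None, "right": None}

    def update(self, theta, now=None):
        now = time.monotonic() if now is None else now
        for side, sign in (("left", -1), ("right", 1)):   # sign convention assumed
            if sign * theta > ANGLE_THRESH:
                if self.hold_start[side] is None:          # a new turn event begins
                    self.hold_start[side] = now
                    self.count[side] += 1
                if (self.count[side] >= TURN_COUNT
                        or now - self.hold_start[side] >= TURN_HOLD_S):
                    return "turn_" + side
            else:
                self.hold_start[side] = None
        return None
```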
a) Construction of hidden Markov models
The hidden Markov model comprises two state sets and three probability sets and can be written λ = (N, M, A, B, π), where N is the number of hidden states and S = {s1, s2, …, sN} is the state set; M is the number of observable symbols and V is the set of observation values; A is the state-transition probability matrix; B is the probability of the observed value V occurring in state S; and π is the initial state distribution. The processed data are input to the first-layer hidden Markov model, and the wearer's intention is preliminarily classified from the resulting set of maximum-likelihood estimates.
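As a non-limiting sketch of this first recognition layer, one Gaussian hidden Markov model per intention class can be trained and the class with the largest log-likelihood selected; the hmmlearn library, the number of hidden states and the feature layout are assumptions, not requirements of the invention.

```python
import numpy as np
from hmmlearn import hmm

INTENTS = ["walk", "stop", "turn_left", "turn_right", "cross_obstacle"]

def train_intent_hmms(sequences_by_intent, n_states=4):
    """Train one Gaussian HMM (lambda = (A, B, pi), n_states hidden states) per
    intention class; sequences_by_intent maps an intent name to a list of
    observation sequences, each of shape [T, n_features]."""
    models = {}
    for intent, seqs in sequences_by_intent.items():
        X = np.vstack(seqs)
        lengths = [len(s) for s in seqs]
        m = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        m.fit(X, lengths)
        models[intent] = m
    return models

def first_layer_decision(models, seq):
    """First-layer classification: the intent whose HMM log-likelihood is largest."""
    scores = {intent: m.score(seq) for intent, m in models.items()}
    return max(scores, key=scores.get), scores
```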
b) Construction of Bayesian model
The sequence types corresponding to the confusable wearer intentions output by the first-layer hidden Markov model are input to the second-layer Bayesian network, and the wearer's behavioral intention is finally decided by taking the maximum probability value. The Bayesian model is:
P(A|B)=P(B|A)P(A)/P(B) (3)
where P(A|B) is the probability of event A given that event B has occurred, i.e. the conditional probability of A after B is known, also called the posterior probability of A given B; P(A) is the prior (or marginal) probability of A, so called because it takes no account of B; P(B|A) is the conditional probability of B after A is known, also called the posterior probability of B given A; and P(B) is the prior (or marginal) probability of B. The maximum probability value is then used to decide the wearer's behavioral intention.
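The second layer may be sketched as follows: when the two best first-layer scores are close (a confusable pair), a naive Bayes classifier applies P(A|B) = P(B|A)P(A)/P(B) to summary features of the same observation window and the maximum posterior decides. The scikit-learn classifier, the feature choice and the confusion margin below are assumptions for illustration only.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

CONFUSION_MARGIN = 0.5   # log-likelihood margin marking "confusable" intents; assumed

def train_second_layer(X_train, y_train):
    """X_train: [n_windows, n_features] summary features of the observation
    windows; y_train: intent labels.  Feature choice is an assumption."""
    return GaussianNB().fit(X_train, y_train)

def final_intent(nb, hmm_scores, window_features):
    """If the first layer is confident, keep its answer; otherwise apply the
    Bayesian second layer and take the maximum posterior probability."""
    ranked = sorted(hmm_scores.items(), key=lambda kv: kv[1], reverse=True)
    (best, s1), (_, s2) = ranked[0], ranked[1]
    if s1 - s2 > CONFUSION_MARGIN:
        return best
    probs = nb.predict_proba(np.asarray(window_features).reshape(1, -1))[0]
    return nb.classes_[int(np.argmax(probs))]
```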
Step 5: control the joint angles in joint space with an adaptive radial basis function neural network control method, according to the recognized action. First, the control quantity τ required for control is analyzed with the dynamic model of the lower-limb exoskeleton, exposing a quantity F that cannot be obtained accurately during control; an RBFNN then approximates this uncertain term in the system model, solving the problem of unknown system terms in computed-torque control based on a dynamic model. By using the adaptive neural network to track the different desired angles, smooth transitions between motions are achieved, and the real-time performance of the control and its robustness during transitions between different states are ensured.
The dynamic model of the exoskeleton is:

M(q)q̈ + C(q, q̇)q̇ + G(q) + f_dis = τ

where M(q) is the positive-definite inertia matrix, C(q, q̇) is the Coriolis and centrifugal matrix, G(q) is the gravity matrix, f_dis is the friction and unknown disturbance term, τ is the control input, q is the joint vector, and q̇ is its derivative.

The unknown term is approximated by the adaptive neural network, whose output (given only as an image in the original publication) is a function of the neural network parameter matrix W and the neural network input.

The control law is likewise given only as an image in the original publication; in it, e is the system error, ė is its derivative, and ω is a robust term used to overcome the approximation error of the neural network.
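Since the neural network output and the control law appear only as images in the original publication, the following is merely a generic sketch of RBFNN-based computed-torque control for a single joint; the Gaussian basis functions, the adaptation law, the filtered error and all gains are assumptions, not the expressions of the invention.

```python
import numpy as np

class RBFNN:
    """F_hat = W^T h(x): radial basis function approximation of the unknown
    term in the dynamics (centres, width and adaptation gain are assumed)."""
    def __init__(self, n_in, n_hidden=25, width=2.0, gamma=20.0):
        self.c = np.random.uniform(-1.0, 1.0, (n_hidden, n_in))  # RBF centres
        self.b = width
        self.W = np.zeros(n_hidden)
        self.gamma = gamma                                        # adaptation gain

    def h(self, x):
        return np.exp(-np.sum((self.c - x) ** 2, axis=1) / (2.0 * self.b ** 2))

    def output(self, x):
        return self.W @ self.h(x)

    def adapt(self, x, s, dt):
        # Assumed adaptive law: W_dot = gamma * h(x) * s, with s the filtered error.
        self.W += self.gamma * self.h(x) * s * dt

def control_step(net, q, dq, q_d, dq_d, ddq_d,
                 m=1.0, kp=50.0, kd=10.0, dt=0.001):
    """Computed-torque style control of a single joint with assumed inertia m
    and gains kp, kd; the RBFNN output compensates the lumped unknown term."""
    e, de = q_d - q, dq_d - dq
    s = de + (kp / kd) * e                     # filtered tracking error (assumed)
    x = np.array([q, dq, q_d, dq_d])           # neural network input (assumed)
    tau = m * (ddq_d + kd * de + kp * e) + net.output(x)
    net.adapt(x, s, dt)
    return tau

# Example: net = RBFNN(n_in=4); tau = control_step(net, q, dq, q_d, dq_d, ddq_d)
```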
The neural network control method for the rehabilitation lower-limb exoskeleton robot based on wearer intention recognition has been described in detail above. The wearer's movement intention is acquired and analyzed through the camera and transmitted to the exoskeleton robot, which is then adaptively controlled by the adaptive radial basis function neural network control method. This can greatly improve the control performance of the exoskeleton robot and thus the rehabilitation effect of the lower-limb exoskeleton robot.
In the description of the present application, it is to be understood that the terms "upper", "lower", "front", "rear", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like indicate orientations or positional relationships based on those shown in the drawings, and are only for convenience in describing the present application and simplifying the description, but do not indicate or imply that the referred device or element must have a specific orientation, be constructed in a specific orientation, and be operated, and thus, should not be construed as limiting the present application.
Those skilled in the art will appreciate that, in addition to implementing the systems, apparatus, and various modules thereof provided by the present invention in purely computer readable program code, the same procedures can be implemented entirely by logically programming method steps such that the systems, apparatus, and various modules thereof are provided in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Therefore, the system, the device and the modules thereof provided by the present invention can be considered as a hardware component, and the modules included in the system, the device and the modules thereof for implementing various programs can also be considered as structures in the hardware component; modules for performing various functions may also be considered to be both software programs for performing the methods and structures within hardware components.
The foregoing description of specific embodiments of the present invention has been presented. It is to be understood that the present invention is not limited to the specific embodiments described above, and that various changes or modifications may be made by one skilled in the art within the scope of the appended claims without departing from the spirit of the invention. The embodiments and features of the embodiments of the present application may be combined with each other arbitrarily without conflict.

Claims (7)

1. A neural network control system for an exoskeleton robot based on intent recognition, comprising:
an intention classification module: classifying the wearer's intent;
an action and track planning module: setting rehabilitation training actions of a patient, collecting lower limb electromyographic signals of a human body, collecting gait data of the human body during walking, and planning a motion track in a joint space of the lower limb exoskeleton robot;
the exoskeleton state judgment module: acquiring state information of the exoskeleton, processing and analyzing the state information to obtain current state information of the exoskeleton;
intention information acquisition module: collecting facial features of an exoskeleton wearer to obtain movement intention information of the wearer;
an intent recognition module: analyzing the movement intention of the wearer according to the collected movement intention information of the wearer, and identifying the movement intention of the wearer;
an exoskeleton control module: performing adaptive control of the next trajectory through neural network control according to the identified movement intention of the wearer.
2. The neural network control system for an exoskeleton robot based on intent recognition of claim 1, wherein the wearer's intent is classified into: normal walking, stop walking, normal left turn, normal right turn, and crossing obstacles.
3. The neural network control system for an exoskeleton robot based on intent recognition of claim 1 wherein the action and trajectory planning module:
the patient's rehabilitation training actions are set, the lower-limb electromyographic signals are collected with an electromyograph, gait data during walking are collected with the aid of a gyroscope and an accelerometer, and the motion trajectory is planned in the joint space of the lower limb exoskeleton robot.
4. The neural network control system for an exoskeleton robot based on intent recognition of claim 1 wherein the exoskeleton state determination module:
torque sensors collect the torque values of the exoskeleton robot's left hip, left knee, right hip and right knee joints, and encoders collect the current angular velocity of each joint; Kalman filtering is applied to the collected torque and angular velocity values, the current gait is computed through forward kinematics, and the exoskeleton's current motion state is identified, the identification result comprising the current motion state and the current walking speed.
5. The intent recognition based exoskeleton robot neural network control system of claim 1 wherein said intent information collection module:
the original head image of the wearer captured by the camera is transmitted to an image processor, which preprocesses the image; median filtering is used to remove noise while preserving image edges;
histogram equalization is used to enhance image detail, a Laplacian pyramid is used to extract facial features, the positions of the eyes are identified from their characteristics in the image, and the center of the head is identified from the semicircular feature of the head's edge.
6. The neural network control system for an exoskeleton robot based on intent recognition of claim 5, wherein the intent recognition module:
after the positions of the eyes and the center of the head are identified, the head deflection angle is calculated to obtain the wearer's turning angle. Based on the trained relation between turning angle and steering action, when the wearer turns toward the same direction more than a preset number of times within a given period, or holds the turn longer than a preset duration, in each case exceeding a preset angle, the wearer is judged to intend to turn in that direction. The wearer's intention is then recognized through hidden Markov and Bayesian networks, combined with the identified current motion state of the exoskeleton, to obtain the movement intention, and a wearer behavior database is established so that the corresponding preset motion trajectory is adopted.
7. The neural network control system for an exoskeleton robot based on intent recognition of claim 6 wherein the exoskeleton control module:
the joint angles in joint space are controlled with an adaptive radial basis function neural network control method;
according to the different recognized actions, the adaptive neural network tracks different desired angles, achieving smooth transitions between actions;
an RBFNN approximates the uncertain terms in the system model, solving the problem of unknown system terms in computed-torque control based on a dynamic model;
the adaptive neural network ensures the real-time performance of the control and its robustness during transitions between different states.
CN202010490739.0A 2020-06-02 2020-06-02 Neural network control system of exoskeleton robot based on intention recognition Pending CN111631923A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010490739.0A CN111631923A (en) 2020-06-02 2020-06-02 Neural network control system of exoskeleton robot based on intention recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010490739.0A CN111631923A (en) 2020-06-02 2020-06-02 Neural network control system of exoskeleton robot based on intention recognition

Publications (1)

Publication Number Publication Date
CN111631923A true CN111631923A (en) 2020-09-08

Family

ID=72323811

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010490739.0A Pending CN111631923A (en) 2020-06-02 2020-06-02 Neural network control system of exoskeleton robot based on intention recognition

Country Status (1)

Country Link
CN (1) CN111631923A (en)

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112336340A (en) * 2020-10-15 2021-02-09 宁波工业互联网研究院有限公司 Human body movement intention identification method of waist assistance exoskeleton robot
CN112494282A (en) * 2020-12-01 2021-03-16 天津理工大学 Exoskeleton main power parameter optimization method based on deep reinforcement learning
CN112546553A (en) * 2020-12-01 2021-03-26 河南工业大学 Auxiliary action learning system and method based on wearable equipment
CN112870028A (en) * 2021-01-21 2021-06-01 上海傅利叶智能科技有限公司 Method and device for recognizing walking intention of user, intelligent walking stick and auxiliary system
CN112947093A (en) * 2021-04-07 2021-06-11 长春工业大学 Robot dispersion robust tracking control method, system and storage medium
CN112932898A (en) * 2021-01-28 2021-06-11 东南大学 On-demand auxiliary rehabilitation robot training method based on Bayesian optimization
CN113081671A (en) * 2021-03-31 2021-07-09 东南大学 Method for improving on-demand auxiliary rehabilitation training participation degree based on Bayesian optimization
CN113143697A (en) * 2020-12-18 2021-07-23 深圳市迈步机器人科技有限公司 Control method and device for hip joint exoskeleton
CN113478462A (en) * 2021-07-08 2021-10-08 中国科学技术大学 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal
CN113509349A (en) * 2021-04-12 2021-10-19 杭州风行医疗器械有限公司 Joint rehabilitation device and control method thereof
CN113681541A (en) * 2021-08-12 2021-11-23 杭州程天科技发展有限公司 Exoskeleton control system and method based on Internet of things
CN113952092A (en) * 2021-10-25 2022-01-21 长春理工大学 Control method and control system for lower limb rehabilitation robot
CN113995629A (en) * 2021-11-03 2022-02-01 中国科学技术大学先进技术研究院 Upper limb double-arm rehabilitation robot admittance control method and system based on mirror force field
CN114797007A (en) * 2022-04-02 2022-07-29 中国科学技术大学先进技术研究院 Wearable underwater exoskeleton robot for rehabilitation and use method thereof
WO2022174662A1 (en) * 2021-02-22 2022-08-25 中国科学院深圳先进技术研究院 Sharing control system and method for exoskeleton rehabilitation robot
CN114948609A (en) * 2022-04-12 2022-08-30 北京航空航天大学 Walking aid auxiliary device and method for paralytic

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150057186A (en) * 2013-11-18 2015-05-28 전남대학교산학협력단 A walker using EMG sensors and the controlling method thereof
CN108681396A (en) * 2018-04-28 2018-10-19 北京机械设备研究所 Man-machine interactive system and its method based on brain-myoelectricity bimodal nerve signal
CN109623835A (en) * 2018-12-05 2019-04-16 济南大学 Wheelchair arm-and-hand system based on multimodal information fusion
CN110141239A (en) * 2019-05-30 2019-08-20 东北大学 A kind of motion intention identification and installation method for lower limb exoskeleton
CN110303471A (en) * 2018-03-27 2019-10-08 清华大学 Assistance exoskeleton control system and control method
CN110653817A (en) * 2019-08-20 2020-01-07 南京航空航天大学 Exoskeleton robot power-assisted control system and method based on neural network

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150057186A (en) * 2013-11-18 2015-05-28 전남대학교산학협력단 A walker using EMG sensors and the controlling method thereof
CN110303471A (en) * 2018-03-27 2019-10-08 清华大学 Assistance exoskeleton control system and control method
CN108681396A (en) * 2018-04-28 2018-10-19 北京机械设备研究所 Man-machine interactive system and its method based on brain-myoelectricity bimodal nerve signal
CN109623835A (en) * 2018-12-05 2019-04-16 济南大学 Wheelchair arm-and-hand system based on multimodal information fusion
CN110141239A (en) * 2019-05-30 2019-08-20 东北大学 A kind of motion intention identification and installation method for lower limb exoskeleton
CN110653817A (en) * 2019-08-20 2020-01-07 南京航空航天大学 Exoskeleton robot power-assisted control system and method based on neural network

Cited By (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112336340A (en) * 2020-10-15 2021-02-09 宁波工业互联网研究院有限公司 Human body movement intention identification method of waist assistance exoskeleton robot
CN112494282A (en) * 2020-12-01 2021-03-16 天津理工大学 Exoskeleton main power parameter optimization method based on deep reinforcement learning
CN112546553A (en) * 2020-12-01 2021-03-26 河南工业大学 Auxiliary action learning system and method based on wearable equipment
CN113143697B (en) * 2020-12-18 2022-03-08 深圳市迈步机器人科技有限公司 Control method and device for hip joint exoskeleton
CN113143697A (en) * 2020-12-18 2021-07-23 深圳市迈步机器人科技有限公司 Control method and device for hip joint exoskeleton
CN112870028A (en) * 2021-01-21 2021-06-01 上海傅利叶智能科技有限公司 Method and device for recognizing walking intention of user, intelligent walking stick and auxiliary system
CN112870028B (en) * 2021-01-21 2023-03-31 上海傅利叶智能科技有限公司 Method and device for recognizing walking intention of user, intelligent walking stick and auxiliary system
CN112932898A (en) * 2021-01-28 2021-06-11 东南大学 On-demand auxiliary rehabilitation robot training method based on Bayesian optimization
WO2022174662A1 (en) * 2021-02-22 2022-08-25 中国科学院深圳先进技术研究院 Sharing control system and method for exoskeleton rehabilitation robot
CN113081671A (en) * 2021-03-31 2021-07-09 东南大学 Method for improving on-demand auxiliary rehabilitation training participation degree based on Bayesian optimization
CN112947093A (en) * 2021-04-07 2021-06-11 长春工业大学 Robot dispersion robust tracking control method, system and storage medium
CN112947093B (en) * 2021-04-07 2023-05-05 长春工业大学 Distributed robust tracking control method, system and storage medium for robot
CN113509349A (en) * 2021-04-12 2021-10-19 杭州风行医疗器械有限公司 Joint rehabilitation device and control method thereof
CN113478462A (en) * 2021-07-08 2021-10-08 中国科学技术大学 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal
CN113478462B (en) * 2021-07-08 2022-12-30 中国科学技术大学 Method and system for controlling intention assimilation of upper limb exoskeleton robot based on surface electromyogram signal
CN113681541A (en) * 2021-08-12 2021-11-23 杭州程天科技发展有限公司 Exoskeleton control system and method based on Internet of things
CN113952092A (en) * 2021-10-25 2022-01-21 长春理工大学 Control method and control system for lower limb rehabilitation robot
CN113995629A (en) * 2021-11-03 2022-02-01 中国科学技术大学先进技术研究院 Upper limb double-arm rehabilitation robot admittance control method and system based on mirror force field
CN113995629B (en) * 2021-11-03 2023-07-11 中国科学技术大学先进技术研究院 Mirror image force field-based upper limb double-arm rehabilitation robot admittance control method and system
CN114797007A (en) * 2022-04-02 2022-07-29 中国科学技术大学先进技术研究院 Wearable underwater exoskeleton robot for rehabilitation and use method thereof
CN114948609A (en) * 2022-04-12 2022-08-30 北京航空航天大学 Walking aid auxiliary device and method for paralytic

Similar Documents

Publication Publication Date Title
CN111631923A (en) Neural network control system of exoskeleton robot based on intention recognition
Chen et al. A novel gait pattern recognition method based on LSTM-CNN for lower limb exoskeleton
Paulo et al. ISR-AIWALKER: Robotic walker for intuitive and safe mobility assistance and gait analysis
Bijalwan et al. Heterogeneous computing model for post‐injury walking pattern restoration and postural stability rehabilitation exercise recognition
Huang et al. Posture estimation and human support using wearable sensors and walking-aid robot
CN104524742A (en) Cerebral palsy child rehabilitation training method based on Kinect sensor
Zhang et al. Unsupervised cross-subject adaptation for predicting human locomotion intent
CN113043248B (en) Transportation and assembly whole-body exoskeleton system based on multi-source sensor and control method
CN113143256B (en) Gait feature extraction method, lower limb evaluation and control method, device and medium
Tang et al. Wearable supernumerary robotic limb system using a hybrid control approach based on motor imagery and object detection
CN113066260B Parkinson's disease early warning system based on daily behavior analysis
Liu et al. A muscle synergy-inspired method of detecting human movement intentions based on wearable sensor fusion
Chalvatzaki et al. User-adaptive human-robot formation control for an intelligent robotic walker using augmented human state estimation and pathological gait characterization
Han et al. Rehabilitation posture correction using deep neural network
Gong et al. BPNN-based real-time recognition of locomotion modes for an active pelvis orthosis with different assistive strategies
Amer et al. Wheelchair control system based eye gaze
Lu et al. A deep learning based end-to-end locomotion mode detection method for lower limb wearable robot control
Zheng et al. A GMM-DTW-based locomotion mode recognition method in lower limb exoskeleton
Hollinger et al. The influence of gait phase on predicting lower-limb joint angles
Goffredo et al. A neural tracking and motor control approach to improve rehabilitation of upper limb movements
Chalvatzaki et al. Comparative experimental validation of human gait tracking algorithms for an intelligent robotic rollator
Feng et al. Small-data-driven temporal convolutional capsule network for locomotion mode recognition of robotic prostheses
Paulo et al. Human gait pattern changes detection system: A multimodal vision-based and novelty detection learning approach
CN115416003A (en) On-demand auxiliary control method for lower limb exoskeleton of old people
Xu et al. Multi-sensor based human motion intention recognition algorithm for walking-aid robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200908