WO2023136122A1 - Motion assistance apparatus and motion assistance method - Google Patents

Motion assistance apparatus and motion assistance method

Info

Publication number
WO2023136122A1
Authority
WO
WIPO (PCT)
Prior art keywords
motion
user
sensor signal
policy
sensor
Prior art date
2022-01-12
Application number
PCT/JP2022/048003
Other languages
English (en)
Japanese (ja)
Inventor
淳一朗 古川
淳 森本
Original Assignee
国立研究開発法人理化学研究所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
2022-01-12
Filing date
2022-12-26
Publication date
2023-07-20
Application filed by 国立研究開発法人理化学研究所
Publication of WO2023136122A1

Classifications

    • A: HUMAN NECESSITIES
    • A61: MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61H: PHYSICAL THERAPY APPARATUS, e.g. DEVICES FOR LOCATING OR STIMULATING REFLEX POINTS IN THE BODY; ARTIFICIAL RESPIRATION; MASSAGE; BATHING DEVICES FOR SPECIAL THERAPEUTIC OR HYGIENIC PURPOSES OR SPECIFIC PARTS OF THE BODY
    • A61H3/00Appliances for aiding patients or disabled persons to walk about
    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00Manipulators not otherwise provided for

Definitions

  • The present invention relates to a motion support device and a motion support method that assist a user's motions.
  • Development of exoskeleton-type robots that assist human movement while physically interacting with an actively moving human has been advanced (Patent Document 1). Methods for controlling such robots include a regression method and a discrimination method.
  • The regression method sequentially estimates joint motions to control the robot. Although this method allows the user to operate intuitively, the user must consciously attend to the operation, and the accuracy of the operation depends on the user.
  • The discrimination method discriminates the user's motion intention and controls the robot to perform the intended, predetermined motion. Although the motion itself is fixed in advance, if the purpose of the user's motion matches the robot's motion, the motion can be performed with an accuracy determined by the robot.
  • The user's intended motion is estimated by a discriminator from the myoelectric potential (EMG) signals generated when the user starts the motion.
  • Such a discriminator can be generated using machine learning methods such as SVM (Support Vector Machine) or LDA (Linear Discriminant Analysis) (Non-Patent Document 1). Using EMG signals enables more accurate estimation than relying on joint angles alone.
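  • As an illustrative sketch (not part of the patent disclosure), such a discriminator could be trained on windowed EMG features with scikit-learn as follows; the feature matrix, labels, and hyperparameters here are stand-in assumptions for demonstration only.

      import numpy as np
      from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
      from sklearn.model_selection import cross_val_score
      from sklearn.svm import SVC

      # Stand-in data: 200 feature windows (e.g. RMS of 4 EMG channels);
      # y = 1 marks windows from the intended motion, y = 0 all others.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(200, 4))
      y = (X[:, 0] + X[:, 1] > 0).astype(int)

      for name, clf in [("SVM", SVC(kernel="rbf")),
                        ("LDA", LinearDiscriminantAnalysis())]:
          scores = cross_val_score(clf, X, y, cv=5)
          print(f"{name}: mean CV accuracy {scores.mean():.2f}")
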
  • One aspect of the present invention is a motion support device comprising: an exoskeleton unit including a joint; a drive unit that drives the joint; an acquisition unit that acquires sensor signals representing the state of the joint and the state of the user; a discrimination unit that discriminates the motion intended by the user from the sensor signals; and a control unit that generates and outputs a command for driving the joint from the sensor signals, using a policy selected according to the discrimination result. The discrimination unit is trained by PU learning using a training data set in which, among the sensor signals obtained by performing the target motion and motions other than the target motion a plurality of times, only a part of the sensor signals obtained from the target motion is labeled as a positive example and the other sensor signals are left unlabeled.
  • the sensor signal may include a user's EMG signal.
  • the sensor signal may include the angles of the user's joints.
  • Another aspect of the present invention is a motion support method using an exoskeleton-type robot comprising an exoskeleton unit including a joint and a drive unit that drives the joint, the method comprising: an acquisition step of acquiring sensor signals representing the state of the joint and the state of the user; a discrimination step of discriminating the motion intended by the user from the sensor signals; and a control step of generating and outputting a command for driving the joint.
  • In the discrimination step, the discrimination is performed by a discriminator trained by PU learning using a training data set in which, among the sensor signals obtained by performing the target motion and motions other than the target motion a plurality of times, only a part of the sensor signals obtained from the target motion is labeled as a positive example and the remaining sensor signals are unlabeled.
  • The exoskeleton unit 10 is a device that supports the movement of the knee joint using a pneumatic artificial muscle (PAM).
  • FIG. 2A shows the appearance of the exoskeleton unit 10, and FIG. 2B shows its schematic configuration.
  • The exoskeleton unit 10 includes a thigh part 201, a lower-leg part 202, and a knee part 203, all made of lightweight carbon resin.
  • A PAM 210 is built into the thigh part 201; by adjusting the flow rate of compressed air, the rotational torque of the knee 203 can be controlled to assist the movement of the user's knee joint.
  • The control unit 30 controls the exoskeleton unit 10 by optimal control (iLQG) based on the sensor signals obtained from the sensor group 40.
  • The control unit 30 includes a motion determination unit 31, a support policy switching unit 32, and a command output unit 33.
  • The control unit 30 is configured as a computer including an arithmetic device (processor) and a storage device, and realizes these functions by the arithmetic device executing a program.
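  • The patent does not disclose its specific iLQG formulation; as background only, the following sketch shows the time-varying LQR backward pass that forms the inner loop of iLQG, applied here to a hypothetical linearized knee-joint model with assumed parameters.

      import numpy as np

      # Hypothetical discretized knee dynamics, state x = [angle, angular velocity].
      DT = 0.01
      A = np.array([[1.0, DT], [0.0, 1.0]])
      B = np.array([[0.0], [DT]])
      Q = np.diag([10.0, 1.0])   # penalty on state error
      R = np.array([[0.1]])      # penalty on torque effort

      def lqr_backward_pass(A, B, Q, R, horizon=100):
          """Riccati recursion producing time-varying feedback gains.

          iLQG repeats this pass on linearizations around a nominal trajectory.
          """
          P = Q.copy()
          gains = []
          for _ in range(horizon):
              K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
              P = Q + A.T @ P @ (A - B @ K)
              gains.append(K)
          return gains[::-1]

      K0 = lqr_backward_pass(A, B, Q, R)[0]
      torque = (-K0 @ np.array([0.2, 0.0])).item()   # command for a 0.2 rad error
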
  • The support policy π_OTHER is a support policy that generates torque compensating for the self-weight (gravity) of the exoskeleton unit 10.
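  • As an illustrative sketch (our simplification, not the patent's controller), gravity compensation for a single knee joint can be written as follows, using a point-mass link model with hypothetical parameters.

      import math

      MASS_KG = 1.8   # assumed effective mass of lower leg plus exoskeleton segment
      COM_M = 0.25    # assumed distance from knee axis to segment center of mass
      G = 9.81

      def gravity_compensation_torque(knee_angle_rad: float) -> float:
          """Torque canceling the gravitational load on the knee joint
          (angle measured from the vertical hanging position)."""
          return MASS_KG * G * COM_M * math.sin(knee_angle_rad)

      print(gravity_compensation_torque(math.radians(30.0)))
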
  • The sensor group 40 includes a plurality of sensors that detect the state of the exoskeleton unit 10 and the state of the user. The sensors that detect the state of the exoskeleton unit 10 are an angle sensor 41 that detects the joint angle and a torque sensor 42 that detects the torque generated at the joint. The sensors that detect the state of the user are an EMG sensor 43 and a motion sensor 44. The EMG sensor 43 acquires EMG (electromyographic) signals from multiple parts of the user; in this embodiment, EMG signals of the rectus femoris, biceps femoris, lateral head of the gastrocnemius, and tibialis anterior are used.
  • The motion sensor 44 acquires the orientation (rotation) and acceleration of one or more parts of the user; in this embodiment, the orientation and acceleration of the trunk are used.
  • The sensor information to be acquired is not limited to the above. As information representing the state of the user, other biological signals such as electroencephalograms and heartbeats, a motion capture system, camera images, and the like may also be used.
  • PU learning (Positive and Unlabeled learning) is used so that the target motion can be discriminated accurately even when correct labels (positive or negative examples) are not attached to all of the training data. The learning of the discriminator is described below.
  • PU learning is a machine learning method that uses training data in which only some positive examples are labeled and the remaining data are unlabeled. Labeled data are always positive examples, while unlabeled data may be either positive or negative examples.
  • The discriminant function f is obtained by minimizing a weighted empirical risk over the training data, where n is the total number of training examples.
  • Expression (5) treats each labeled example as positive data with a weight of 1, and each unlabeled example both as positive data with a weight of η(z) and as negative data with a weight of 1 - η(z). The expected value E(h) of a function h is thus estimated in a form consistent with this weighting:

        E(h) ≈ (1/n) [ Σ_{z: labeled} h(z, 1) + Σ_{z: unlabeled} ( η(z) h(z, 1) + (1 - η(z)) h(z, 0) ) ]

  • Using Expression (5), E(y) is expressed in the same weighted form, where n is in that case the number of labeled training examples.
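  • As an illustrative sketch of this weighting scheme (our construction in the standard Elkan-Noto style; the data and names are hypothetical), a probabilistic classifier fit to labeled-vs-unlabeled data yields the per-example weights η(z), and the final discriminator is trained on the reweighted set:

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      # Hypothetical data: rows of z are feature vectors; s = 1 marks the few
      # examples that carry a positive label, everything else is unlabeled.
      z = rng.normal(size=(400, 4))
      true_y = (z[:, 0] + z[:, 1] > 0).astype(int)          # unknown in practice
      s = ((true_y == 1) & (rng.random(400) < 0.3)).astype(int)

      # Step 1: estimate g(z) = p(s = 1 | z) with a probabilistic classifier.
      gz = LogisticRegression().fit(z, s).predict_proba(z)[:, 1]

      # Step 2: estimate c = p(s = 1 | y = 1) as the mean of g over labeled data.
      c = gz[s == 1].mean()

      # Step 3: each unlabeled example counts as positive with weight eta(z)
      # and as negative with weight 1 - eta(z).
      gu = np.clip(gz[s == 0], 1e-6, 1 - 1e-6)
      eta = np.clip((1.0 - c) / c * gu / (1.0 - gu), 0.0, 1.0)

      # Step 4: train the final discriminator on the reweighted, duplicated set.
      n_pos, n_unl = int((s == 1).sum()), int((s == 0).sum())
      z_train = np.vstack([z[s == 1], z[s == 0], z[s == 0]])
      y_train = np.concatenate([np.ones(n_pos), np.ones(n_unl), np.zeros(n_unl)])
      w_train = np.concatenate([np.ones(n_pos), eta, 1.0 - eta])
      f = LogisticRegression().fit(z_train, y_train, sample_weight=w_train)
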
  • EMG signal of the rectus femoris: e_1(t)
  • EMG signal of the biceps femoris: e_2(t)
  • EMG signal of the lateral head of the gastrocnemius: e_3(t)
  • EMG signal of the tibialis anterior: e_4(t)
  • Velocity of the trunk: θ'_tr(t)
  • The EMG signals are obtained from the EMG sensor 43, the knee joint angle from the angle sensor (potentiometer) 41, and the trunk velocity from the motion sensor (gyro sensor) 44 attached to the subject's chest. The hip joint angle is calculated from the values of the angle sensor (potentiometer) 41 and the motion sensor (gyro sensor) 44.
  • The EMG signal e_i(t) is obtained by applying the following processing to the raw sensor signal. Let ē_i(t) be the full-wave rectified and low-pass filtered signal, ē_i^rest its value at rest, and ē_i^mvc its value at maximum voluntary contraction; the normalized signal then takes the standard form

        e_i(t) = ( ē_i(t) - ē_i^rest ) / ( ē_i^mvc - ē_i^rest )
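  • A minimal preprocessing sketch along these lines (our illustration; the sampling rate and low-pass cutoff are assumptions, not values from the patent):

      import numpy as np
      from scipy.signal import butter, filtfilt

      FS_HZ = 1000.0      # assumed sampling rate
      CUTOFF_HZ = 2.0     # assumed low-pass cutoff for the EMG envelope

      def emg_envelope(raw: np.ndarray) -> np.ndarray:
          """Full-wave rectification followed by low-pass filtering."""
          b, a = butter(2, CUTOFF_HZ / (FS_HZ / 2.0), btype="low")
          return filtfilt(b, a, np.abs(raw))

      def normalize_emg(env: np.ndarray, rest: float, mvc: float) -> np.ndarray:
          """Scale so rest maps to 0 and maximum voluntary contraction to 1."""
          return (env - rest) / (mvc - rest)

      raw = np.random.default_rng(0).normal(size=5000)   # stand-in raw EMG
      env = emg_envelope(raw)
      e = normalize_emg(env, rest=env.min(), mvc=env.max())
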
  • FIG. 3 is a flowchart showing the flow of learning processing.
  • In step S101, the subject is asked to perform the stand-up motion, which is the target motion, in order to acquire labeled training data. The initial state of the motion is the sitting state.
  • In step S102, sensor signals are acquired during execution of the motion. The EMG signals are processed as described above.
  • In step S103, labeling is performed. Labeling is done manually while viewing the time-series data of the motion; based on the trunk velocity and the joint angles of the knees and hips, the time after the start of the standing motion is labeled as the standing motion.
  • In step S104, in order to acquire unlabeled training data, the subject is asked to stand up, cross their legs, stretch out their hands, and sit back down.
  • In step S105, the sensor signals during these motions are acquired, and the same signal processing is applied to the EMG signals.
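  • Assembled as a PU training set, the data from these steps might look like the following sketch (our illustration; shapes and values are stand-ins): only part of the stand-up windows receive the positive label, as described above.

      import numpy as np

      rng = np.random.default_rng(0)
      # Stand-in feature windows: 4 EMG channels + knee/hip angles + trunk velocity.
      stand_up = rng.normal(loc=1.0, size=(60, 7))   # target-motion windows
      other = rng.normal(loc=0.0, size=(180, 7))     # leg-cross, reaching, re-seat

      # Only part of the stand-up windows are labeled positive (s = 1); everything
      # else, including the remaining stand-up windows, stays unlabeled (s = 0).
      n_labeled = len(stand_up) // 2
      X = np.vstack([stand_up, other])
      s = np.zeros(len(X), dtype=int)
      s[:n_labeled] = 1
      # X and s can now be fed to the PU-learning procedure sketched earlier.
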
  • FIG. 4 is a flowchart showing the flow of operation support processing.
  • In step S201, the control unit 30 acquires sensor signals from the sensor group 40.
  • In step S202, the motion determination unit 31 inputs the sensor signals to the discriminator obtained by the learning process described above and determines whether the current motion is the motion to be supported. If it is (S203-YES), in step S204 the support policy switching unit 32 decides to use the control policy π_STS for supporting the target motion (the stand-up motion).
  • If it is not (S203-NO), in step S205 the support policy switching unit 32 decides to use the other control policy π_OTHER.
  • In step S206, the command output unit 33 generates a desired torque control command from the sensor signals and the selected support policy, and outputs it to the drive unit 20.
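  • As an illustrative sketch (our pseudocode-level illustration in Python; the object and method names are hypothetical, not the patent's API), one iteration of this support loop per control tick could look as follows.

      def support_loop(sensors, discriminator, policy_sts, policy_other, drive):
          """One iteration per control tick: sense, discriminate, switch, act."""
          while True:
              signal = sensors.read()                    # S201: joint + user state
              is_target = discriminator.predict(signal)  # S202/S203: stand-up?
              policy = policy_sts if is_target else policy_other  # S204/S205
              torque_command = policy(signal)            # S206: generate command
              drive.apply(torque_command)
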
  • In the following, the stand-up motion is denoted STS (Sit-to-Stand), the leg-crossing motion CL (Cross-Legs), the hand-stretching motion RM (Reaching Motion), and the re-sitting motion RS (Re-Seat).
  • For comparison, discriminators were also generated by the following two learning processes. In Comparative Example 1, only the sensor information obtained during the three stand-up motions was used as training data, with labels attached. In Comparative Example 2, learning was performed treating all of the unlabeled data of this method as if they were labeled as negative examples.
  • Graphs (a) to (d) in FIG. 5 show, for each discriminator, the output probability when the subject performs the stand-up motion (STS), leg-crossing motion (CL), hand-stretching motion (RM), and re-sitting motion (RS), respectively. In each graph, curve A is the output of the discriminator of this method, curve B that of Comparative Example 1, and curve C that of Comparative Example 2; the thick line indicates the correct answer.
  • In Comparative Example 1, the stand-up motion is determined accurately, but other motions are also erroneously determined to be the stand-up motion.
  • In Comparative Example 2, motions other than the stand-up motion are not erroneously determined to be the stand-up motion, but the stand-up motion itself is not correctly detected either.
  • With this method, in contrast, the motion the subject is attempting to perform is determined with high accuracy regardless of the motion type.
  • The following table shows the accuracy rate and F value (the harmonic mean of the recall and the precision) of the discrimination results obtained by having the subject perform each of the four motion types 10 times, for 40 motions in total.
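  • For reference, the accuracy rate and F value used here can be computed as in the following sketch (illustrative labels, not the patent's data).

      from sklearn.metrics import accuracy_score, f1_score

      # Illustrative ground truth and predictions (1 = stand-up motion, 0 = other).
      y_true = [1, 1, 1, 0, 0, 0, 0, 1, 0, 0]
      y_pred = [1, 1, 0, 0, 0, 1, 0, 1, 0, 0]

      print("accuracy:", accuracy_score(y_true, y_pred))
      print("F value :", f1_score(y_true, y_pred))  # harmonic mean of precision and recall
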
  • 10: exoskeleton unit, 20: drive unit, 30: control unit, 31: motion determination unit, 32: support policy switching unit, 33: command output unit, 40: sensor group, 41: angle sensor, 42: torque sensor, 43: EMG sensor, 44: motion sensor

Landscapes

  • Health & Medical Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Mechanical Engineering (AREA)
  • Epidemiology (AREA)
  • Pain & Pain Management (AREA)
  • Robotics (AREA)
  • Rehabilitation Therapy (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Public Health (AREA)
  • Veterinary Medicine (AREA)
  • Manipulator (AREA)

Abstract

A motion assistance apparatus characterized by comprising: an exoskeleton part that includes joint parts; a drive part that drives the joint parts; an acquisition part that acquires a sensor signal representing a state of the joint parts and a state of a user; a determination part that determines a motion intended by the user from the sensor signal; and a control part that, according to the determination result of the determination part, uses a policy to generate from the sensor signal a command for driving the joint parts and then outputs the command, the determination part being trained through PU learning using a training data set in which, among the sensor signals obtained by performing a target motion and motions other than the target motion a plurality of times, only a part of the sensor signals obtained from the target motion is given a positive label, and the other sensor signals are not labeled.
PCT/JP2022/048003 2022-01-12 2022-12-26 Motion assistance apparatus and motion assistance method WO2023136122A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2022-003103 2022-01-12
JP2022003103 2022-01-12

Publications (1)

Publication Number Publication Date
WO2023136122A1 (fr) 2023-07-20

Family

ID=87279067

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/048003 WO2023136122A1 (fr) 2022-01-12 Motion assistance apparatus and motion assistance method

Country Status (1)

Country Link
WO (1) WO2023136122A1 (fr)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015199086A1 (fr) * 2014-06-23 2015-12-30 Cyberdyne株式会社 Motion reproduction system and motion reproduction device
JP2018134724A (ja) * 2017-02-24 2018-08-30 ボッシュ株式会社 Motion estimation device and motion assistance device
JP2019080809A (ja) * 2017-10-31 2019-05-30 トヨタ自動車株式会社 State estimation device
JP2021041522A (ja) * 2019-09-05 2021-03-18 株式会社国際電気通信基礎技術研究所 Motion support device and motion support method


Similar Documents

Publication Publication Date Title
Bi et al. A review on EMG-based motor intention prediction of continuous human upper limb motion for human-robot collaboration
US11478367B2 (en) Systems and methods for postural control of a multi-function prosthesis
Barsotti et al. A full upper limb robotic exoskeleton for reaching and grasping rehabilitation triggered by MI-BCI
Krausz et al. Intent prediction based on biomechanical coordination of EMG and vision-filtered gaze for end-point control of an arm prosthesis
CN106112985B (zh) Exoskeleton hybrid control system and method for a lower-limb walking-assistance machine
Wang et al. Free-view, 3d gaze-guided, assistive robotic system for activities of daily living
Sierotowicz et al. EMG-driven machine learning control of a soft glove for grasping assistance and rehabilitation
CN111698969B (zh) Grasping assistance system and method
Agyeman et al. Design and implementation of a wearable device for motivating patients with upper and/or lower limb disability via gaming and home rehabilitation
CN111565680B (zh) System for recognizing information represented by biological signals
Sattar et al. Real-time EMG signal processing with implementation of PID control for upper-limb prosthesis
EP3823562B1 (fr) Method for enabling the movement of objects, and associated apparatus
Al Bakri et al. Intelligent exoskeleton for patients with paralysis
Cappello et al. Evaluation of wrist joint proprioception by means of a robotic device
D'Accolti et al. Online classification of transient EMG patterns for the control of the wrist and hand in a transradial prosthesis
Fleischer et al. Embedded control system for a powered leg exoskeleton
WO2023136122A1 (fr) Motion assistance apparatus and motion assistance method
Baldi et al. Exploiting implicit kinematic kernel for controlling a wearable robotic extra-finger
Itadera et al. Impedance control based assistive mobility aid through online classification of user’s state
Rho et al. Multiple hand posture rehabilitation system using vision-based intention detection and soft-robotic glove
Risteiu et al. Study on ANN based upper limb exoskeleton
Woo et al. Machine learning based recognition of elements in lower-limb movement sequence for proactive control of exoskeletons to assist lifting
Miyake et al. Skeleton recognition-based motion generation and user emotion evaluation with in-home rehabilitation assistive humanoid robot
Zhou et al. Real-time multiple-channel shoulder EMG processing for a rehabilitative upper-limb exoskeleton motion control using ANN machine learning
Adhikari et al. Assist-as-needed controller to a task-based knee rehabilitation exoskeleton

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22920639

Country of ref document: EP

Kind code of ref document: A1